Test Report: KVM_Linux_crio 20091

6f6ff76044c36bcb4277257fa9dc7e7f34dfce32:2024-12-16:37513

Failed tests (22/314)

Order  Failed test  Duration (s)
36 TestAddons/parallel/Ingress 152.48
242 TestPreload 285.31
250 TestKubernetesUpgrade 455.6
268 TestPause/serial/SecondStartNoReconfiguration 50.34
286 TestStartStop/group/old-k8s-version/serial/FirstStart 296.21
294 TestStartStop/group/embed-certs/serial/Stop 139.28
299 TestStartStop/group/no-preload/serial/Stop 139.01
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.99
303 TestStartStop/group/old-k8s-version/serial/DeployApp 0.53
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 109.16
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
313 TestStartStop/group/old-k8s-version/serial/SecondStart 753.7
314 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.35
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.32
316 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.35
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.49
318 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 473.43
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 364.8
320 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 356.41
321 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 93.92
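
To look at one of these failures in isolation, the test can usually be re-run by name with the standard Go test runner from a minikube checkout. The package path below (test/integration, where the addons_test.go and helpers_test.go files referenced later in this report live) and the timeout are assumptions for a local sketch, not the exact Jenkins invocation; the kvm2/crio start arguments this job uses are forwarded through the suite's own flags and are omitted here.

    # hypothetical local re-run of the first failure in the table above
    go test -v -timeout 60m ./test/integration -run 'TestAddons/parallel/Ingress'
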
TestAddons/parallel/Ingress (152.48s)

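The failing step, visible in the log below, is the curl from inside the VM against the ingress: the remote command never gets a response and curl exits with code 28 (operation timed out), which surfaces as "ssh: Process exited with status 28" after roughly 2m10s. A minimal manual check against a running addons-618388 profile would look roughly like the following; the profile name, Host header and ingress-nginx namespace are taken from the log, while the explicit --max-time guard is an added assumption, not part of the test:

    # hypothetical manual repro of the failing check
    out/minikube-linux-amd64 -p addons-618388 ssh "curl -s --max-time 120 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    kubectl --context addons-618388 -n ingress-nginx get pods -o wide
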
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-618388 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-618388 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-618388 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [004073d4-980e-4fd9-ad94-dc4598f84218] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [004073d4-980e-4fd9-ad94-dc4598f84218] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004517061s
I1216 19:38:00.184877   14254 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-618388 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.029882465s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-618388 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-618388 -n addons-618388
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-618388 logs -n 25: (1.534637276s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-654038                                                                     | download-only-654038 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC | 16 Dec 24 19:34 UTC |
	| delete  | -p download-only-646102                                                                     | download-only-646102 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC | 16 Dec 24 19:34 UTC |
	| delete  | -p download-only-654038                                                                     | download-only-654038 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC | 16 Dec 24 19:34 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-010223 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC |                     |
	|         | binary-mirror-010223                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42673                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-010223                                                                     | binary-mirror-010223 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC | 16 Dec 24 19:34 UTC |
	| addons  | enable dashboard -p                                                                         | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC |                     |
	|         | addons-618388                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC |                     |
	|         | addons-618388                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-618388 --wait=true                                                                | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC | 16 Dec 24 19:37 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-618388 addons disable                                                                | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-618388 addons disable                                                                | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
	|         | -p addons-618388                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-618388 addons disable                                                                | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-618388 addons                                                                        | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-618388 addons disable                                                                | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-618388 ip                                                                            | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
	| addons  | addons-618388 addons                                                                        | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-618388 addons disable                                                                | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-618388 addons                                                                        | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:38 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-618388 addons                                                                        | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-618388 ssh curl -s                                                                   | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:38 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-618388 ssh cat                                                                       | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:38 UTC | 16 Dec 24 19:38 UTC |
	|         | /opt/local-path-provisioner/pvc-4e008b7b-de06-41f9-8097-3d4fc784c52a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-618388 addons disable                                                                | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:38 UTC | 16 Dec 24 19:38 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-618388 addons                                                                        | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:38 UTC | 16 Dec 24 19:38 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-618388 addons                                                                        | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:38 UTC | 16 Dec 24 19:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-618388 ip                                                                            | addons-618388        | jenkins | v1.34.0 | 16 Dec 24 19:40 UTC | 16 Dec 24 19:40 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 19:34:56
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 19:34:56.870008   14891 out.go:345] Setting OutFile to fd 1 ...
	I1216 19:34:56.870269   14891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:34:56.870284   14891 out.go:358] Setting ErrFile to fd 2...
	I1216 19:34:56.870292   14891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:34:56.870503   14891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 19:34:56.871277   14891 out.go:352] Setting JSON to false
	I1216 19:34:56.872245   14891 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1042,"bootTime":1734376655,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 19:34:56.872367   14891 start.go:139] virtualization: kvm guest
	I1216 19:34:56.874566   14891 out.go:177] * [addons-618388] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 19:34:56.875833   14891 notify.go:220] Checking for updates...
	I1216 19:34:56.875844   14891 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 19:34:56.877154   14891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 19:34:56.878487   14891 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 19:34:56.879708   14891 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 19:34:56.881065   14891 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 19:34:56.882340   14891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 19:34:56.883827   14891 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 19:34:56.916064   14891 out.go:177] * Using the kvm2 driver based on user configuration
	I1216 19:34:56.917375   14891 start.go:297] selected driver: kvm2
	I1216 19:34:56.917401   14891 start.go:901] validating driver "kvm2" against <nil>
	I1216 19:34:56.917418   14891 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 19:34:56.918454   14891 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 19:34:56.918658   14891 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 19:34:56.933253   14891 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1216 19:34:56.933299   14891 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 19:34:56.933529   14891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 19:34:56.933557   14891 cni.go:84] Creating CNI manager for ""
	I1216 19:34:56.933595   14891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 19:34:56.933602   14891 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 19:34:56.933645   14891 start.go:340] cluster config:
	{Name:addons-618388 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-618388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 19:34:56.933723   14891 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 19:34:56.935487   14891 out.go:177] * Starting "addons-618388" primary control-plane node in "addons-618388" cluster
	I1216 19:34:56.936818   14891 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 19:34:56.936857   14891 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1216 19:34:56.936865   14891 cache.go:56] Caching tarball of preloaded images
	I1216 19:34:56.936936   14891 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 19:34:56.936947   14891 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1216 19:34:56.937260   14891 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/config.json ...
	I1216 19:34:56.937284   14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/config.json: {Name:mk1d5f6df4bb14319daf632ba585b1ab53139758 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:34:56.937413   14891 start.go:360] acquireMachinesLock for addons-618388: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 19:34:56.937457   14891 start.go:364] duration metric: took 30.667µs to acquireMachinesLock for "addons-618388"
	I1216 19:34:56.937473   14891 start.go:93] Provisioning new machine with config: &{Name:addons-618388 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.32.0 ClusterName:addons-618388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 19:34:56.937527   14891 start.go:125] createHost starting for "" (driver="kvm2")
	I1216 19:34:56.939314   14891 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1216 19:34:56.939518   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:34:56.939573   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:34:56.953735   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33843
	I1216 19:34:56.954175   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:34:56.954796   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:34:56.954811   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:34:56.955129   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:34:56.955313   14891 main.go:141] libmachine: (addons-618388) Calling .GetMachineName
	I1216 19:34:56.955485   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:34:56.955641   14891 start.go:159] libmachine.API.Create for "addons-618388" (driver="kvm2")
	I1216 19:34:56.955679   14891 client.go:168] LocalClient.Create starting
	I1216 19:34:56.955722   14891 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem
	I1216 19:34:57.176207   14891 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem
	I1216 19:34:57.439058   14891 main.go:141] libmachine: Running pre-create checks...
	I1216 19:34:57.439100   14891 main.go:141] libmachine: (addons-618388) Calling .PreCreateCheck
	I1216 19:34:57.439677   14891 main.go:141] libmachine: (addons-618388) Calling .GetConfigRaw
	I1216 19:34:57.440093   14891 main.go:141] libmachine: Creating machine...
	I1216 19:34:57.440109   14891 main.go:141] libmachine: (addons-618388) Calling .Create
	I1216 19:34:57.440267   14891 main.go:141] libmachine: (addons-618388) creating KVM machine...
	I1216 19:34:57.440283   14891 main.go:141] libmachine: (addons-618388) creating network...
	I1216 19:34:57.441565   14891 main.go:141] libmachine: (addons-618388) DBG | found existing default KVM network
	I1216 19:34:57.442237   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:34:57.442096   14914 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
	I1216 19:34:57.442278   14891 main.go:141] libmachine: (addons-618388) DBG | created network xml: 
	I1216 19:34:57.442301   14891 main.go:141] libmachine: (addons-618388) DBG | <network>
	I1216 19:34:57.442311   14891 main.go:141] libmachine: (addons-618388) DBG |   <name>mk-addons-618388</name>
	I1216 19:34:57.442318   14891 main.go:141] libmachine: (addons-618388) DBG |   <dns enable='no'/>
	I1216 19:34:57.442323   14891 main.go:141] libmachine: (addons-618388) DBG |   
	I1216 19:34:57.442333   14891 main.go:141] libmachine: (addons-618388) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1216 19:34:57.442346   14891 main.go:141] libmachine: (addons-618388) DBG |     <dhcp>
	I1216 19:34:57.442356   14891 main.go:141] libmachine: (addons-618388) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1216 19:34:57.442369   14891 main.go:141] libmachine: (addons-618388) DBG |     </dhcp>
	I1216 19:34:57.442375   14891 main.go:141] libmachine: (addons-618388) DBG |   </ip>
	I1216 19:34:57.442383   14891 main.go:141] libmachine: (addons-618388) DBG |   
	I1216 19:34:57.442395   14891 main.go:141] libmachine: (addons-618388) DBG | </network>
	I1216 19:34:57.442430   14891 main.go:141] libmachine: (addons-618388) DBG | 
	I1216 19:34:57.447723   14891 main.go:141] libmachine: (addons-618388) DBG | trying to create private KVM network mk-addons-618388 192.168.39.0/24...
	I1216 19:34:57.511927   14891 main.go:141] libmachine: (addons-618388) DBG | private KVM network mk-addons-618388 192.168.39.0/24 created
	I1216 19:34:57.511968   14891 main.go:141] libmachine: (addons-618388) setting up store path in /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388 ...
	I1216 19:34:57.511989   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:34:57.511898   14914 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 19:34:57.512013   14891 main.go:141] libmachine: (addons-618388) building disk image from file:///home/jenkins/minikube-integration/20091-7083/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso
	I1216 19:34:57.512032   14891 main.go:141] libmachine: (addons-618388) Downloading /home/jenkins/minikube-integration/20091-7083/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20091-7083/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1216 19:34:57.773312   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:34:57.773193   14914 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa...
	I1216 19:34:58.178894   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:34:58.178744   14914 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/addons-618388.rawdisk...
	I1216 19:34:58.178916   14891 main.go:141] libmachine: (addons-618388) DBG | Writing magic tar header
	I1216 19:34:58.178925   14891 main.go:141] libmachine: (addons-618388) DBG | Writing SSH key tar header
	I1216 19:34:58.178933   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:34:58.178855   14914 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388 ...
	I1216 19:34:58.178943   14891 main.go:141] libmachine: (addons-618388) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388
	I1216 19:34:58.178952   14891 main.go:141] libmachine: (addons-618388) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube/machines
	I1216 19:34:58.178960   14891 main.go:141] libmachine: (addons-618388) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 19:34:58.178966   14891 main.go:141] libmachine: (addons-618388) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20091-7083
	I1216 19:34:58.178975   14891 main.go:141] libmachine: (addons-618388) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1216 19:34:58.178992   14891 main.go:141] libmachine: (addons-618388) DBG | checking permissions on dir: /home/jenkins
	I1216 19:34:58.178997   14891 main.go:141] libmachine: (addons-618388) DBG | checking permissions on dir: /home
	I1216 19:34:58.179008   14891 main.go:141] libmachine: (addons-618388) DBG | skipping /home - not owner
	I1216 19:34:58.179033   14891 main.go:141] libmachine: (addons-618388) setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388 (perms=drwx------)
	I1216 19:34:58.179068   14891 main.go:141] libmachine: (addons-618388) setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube/machines (perms=drwxr-xr-x)
	I1216 19:34:58.179083   14891 main.go:141] libmachine: (addons-618388) setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube (perms=drwxr-xr-x)
	I1216 19:34:58.179091   14891 main.go:141] libmachine: (addons-618388) setting executable bit set on /home/jenkins/minikube-integration/20091-7083 (perms=drwxrwxr-x)
	I1216 19:34:58.179097   14891 main.go:141] libmachine: (addons-618388) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1216 19:34:58.179103   14891 main.go:141] libmachine: (addons-618388) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1216 19:34:58.179107   14891 main.go:141] libmachine: (addons-618388) creating domain...
	I1216 19:34:58.180072   14891 main.go:141] libmachine: (addons-618388) define libvirt domain using xml: 
	I1216 19:34:58.180094   14891 main.go:141] libmachine: (addons-618388) <domain type='kvm'>
	I1216 19:34:58.180104   14891 main.go:141] libmachine: (addons-618388)   <name>addons-618388</name>
	I1216 19:34:58.180110   14891 main.go:141] libmachine: (addons-618388)   <memory unit='MiB'>4000</memory>
	I1216 19:34:58.180127   14891 main.go:141] libmachine: (addons-618388)   <vcpu>2</vcpu>
	I1216 19:34:58.180135   14891 main.go:141] libmachine: (addons-618388)   <features>
	I1216 19:34:58.180143   14891 main.go:141] libmachine: (addons-618388)     <acpi/>
	I1216 19:34:58.180150   14891 main.go:141] libmachine: (addons-618388)     <apic/>
	I1216 19:34:58.180159   14891 main.go:141] libmachine: (addons-618388)     <pae/>
	I1216 19:34:58.180166   14891 main.go:141] libmachine: (addons-618388)     
	I1216 19:34:58.180174   14891 main.go:141] libmachine: (addons-618388)   </features>
	I1216 19:34:58.180186   14891 main.go:141] libmachine: (addons-618388)   <cpu mode='host-passthrough'>
	I1216 19:34:58.180198   14891 main.go:141] libmachine: (addons-618388)   
	I1216 19:34:58.180213   14891 main.go:141] libmachine: (addons-618388)   </cpu>
	I1216 19:34:58.180222   14891 main.go:141] libmachine: (addons-618388)   <os>
	I1216 19:34:58.180230   14891 main.go:141] libmachine: (addons-618388)     <type>hvm</type>
	I1216 19:34:58.180239   14891 main.go:141] libmachine: (addons-618388)     <boot dev='cdrom'/>
	I1216 19:34:58.180250   14891 main.go:141] libmachine: (addons-618388)     <boot dev='hd'/>
	I1216 19:34:58.180258   14891 main.go:141] libmachine: (addons-618388)     <bootmenu enable='no'/>
	I1216 19:34:58.180270   14891 main.go:141] libmachine: (addons-618388)   </os>
	I1216 19:34:58.180283   14891 main.go:141] libmachine: (addons-618388)   <devices>
	I1216 19:34:58.180297   14891 main.go:141] libmachine: (addons-618388)     <disk type='file' device='cdrom'>
	I1216 19:34:58.180323   14891 main.go:141] libmachine: (addons-618388)       <source file='/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/boot2docker.iso'/>
	I1216 19:34:58.180341   14891 main.go:141] libmachine: (addons-618388)       <target dev='hdc' bus='scsi'/>
	I1216 19:34:58.180349   14891 main.go:141] libmachine: (addons-618388)       <readonly/>
	I1216 19:34:58.180358   14891 main.go:141] libmachine: (addons-618388)     </disk>
	I1216 19:34:58.180404   14891 main.go:141] libmachine: (addons-618388)     <disk type='file' device='disk'>
	I1216 19:34:58.180431   14891 main.go:141] libmachine: (addons-618388)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1216 19:34:58.180443   14891 main.go:141] libmachine: (addons-618388)       <source file='/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/addons-618388.rawdisk'/>
	I1216 19:34:58.180451   14891 main.go:141] libmachine: (addons-618388)       <target dev='hda' bus='virtio'/>
	I1216 19:34:58.180457   14891 main.go:141] libmachine: (addons-618388)     </disk>
	I1216 19:34:58.180464   14891 main.go:141] libmachine: (addons-618388)     <interface type='network'>
	I1216 19:34:58.180470   14891 main.go:141] libmachine: (addons-618388)       <source network='mk-addons-618388'/>
	I1216 19:34:58.180477   14891 main.go:141] libmachine: (addons-618388)       <model type='virtio'/>
	I1216 19:34:58.180481   14891 main.go:141] libmachine: (addons-618388)     </interface>
	I1216 19:34:58.180488   14891 main.go:141] libmachine: (addons-618388)     <interface type='network'>
	I1216 19:34:58.180493   14891 main.go:141] libmachine: (addons-618388)       <source network='default'/>
	I1216 19:34:58.180500   14891 main.go:141] libmachine: (addons-618388)       <model type='virtio'/>
	I1216 19:34:58.180534   14891 main.go:141] libmachine: (addons-618388)     </interface>
	I1216 19:34:58.180551   14891 main.go:141] libmachine: (addons-618388)     <serial type='pty'>
	I1216 19:34:58.180558   14891 main.go:141] libmachine: (addons-618388)       <target port='0'/>
	I1216 19:34:58.180564   14891 main.go:141] libmachine: (addons-618388)     </serial>
	I1216 19:34:58.180576   14891 main.go:141] libmachine: (addons-618388)     <console type='pty'>
	I1216 19:34:58.180584   14891 main.go:141] libmachine: (addons-618388)       <target type='serial' port='0'/>
	I1216 19:34:58.180589   14891 main.go:141] libmachine: (addons-618388)     </console>
	I1216 19:34:58.180594   14891 main.go:141] libmachine: (addons-618388)     <rng model='virtio'>
	I1216 19:34:58.180600   14891 main.go:141] libmachine: (addons-618388)       <backend model='random'>/dev/random</backend>
	I1216 19:34:58.180606   14891 main.go:141] libmachine: (addons-618388)     </rng>
	I1216 19:34:58.180611   14891 main.go:141] libmachine: (addons-618388)     
	I1216 19:34:58.180621   14891 main.go:141] libmachine: (addons-618388)     
	I1216 19:34:58.180632   14891 main.go:141] libmachine: (addons-618388)   </devices>
	I1216 19:34:58.180639   14891 main.go:141] libmachine: (addons-618388) </domain>
	I1216 19:34:58.180647   14891 main.go:141] libmachine: (addons-618388) 
	I1216 19:34:58.186519   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:38:4c:89 in network default
	I1216 19:34:58.187149   14891 main.go:141] libmachine: (addons-618388) starting domain...
	I1216 19:34:58.187163   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:34:58.187168   14891 main.go:141] libmachine: (addons-618388) ensuring networks are active...
	I1216 19:34:58.187829   14891 main.go:141] libmachine: (addons-618388) Ensuring network default is active
	I1216 19:34:58.188125   14891 main.go:141] libmachine: (addons-618388) Ensuring network mk-addons-618388 is active
	I1216 19:34:58.188618   14891 main.go:141] libmachine: (addons-618388) getting domain XML...
	I1216 19:34:58.189252   14891 main.go:141] libmachine: (addons-618388) creating domain...
	I1216 19:34:59.611349   14891 main.go:141] libmachine: (addons-618388) waiting for IP...
	I1216 19:34:59.612100   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:34:59.612509   14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
	I1216 19:34:59.612563   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:34:59.612505   14914 retry.go:31] will retry after 260.418297ms: waiting for domain to come up
	I1216 19:34:59.875034   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:34:59.875546   14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
	I1216 19:34:59.875577   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:34:59.875496   14914 retry.go:31] will retry after 293.540026ms: waiting for domain to come up
	I1216 19:35:00.171153   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:00.171578   14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
	I1216 19:35:00.171622   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:00.171567   14914 retry.go:31] will retry after 302.02571ms: waiting for domain to come up
	I1216 19:35:00.474954   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:00.475449   14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
	I1216 19:35:00.475482   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:00.475429   14914 retry.go:31] will retry after 385.529875ms: waiting for domain to come up
	I1216 19:35:00.863267   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:00.863723   14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
	I1216 19:35:00.863767   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:00.863717   14914 retry.go:31] will retry after 640.272037ms: waiting for domain to come up
	I1216 19:35:01.505404   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:01.505803   14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
	I1216 19:35:01.505837   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:01.505774   14914 retry.go:31] will retry after 721.536466ms: waiting for domain to come up
	I1216 19:35:02.229456   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:02.230068   14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
	I1216 19:35:02.230098   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:02.229987   14914 retry.go:31] will retry after 1.102160447s: waiting for domain to come up
	I1216 19:35:03.334077   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:03.334523   14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
	I1216 19:35:03.334550   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:03.334486   14914 retry.go:31] will retry after 1.363083549s: waiting for domain to come up
	I1216 19:35:04.699456   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:04.699858   14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
	I1216 19:35:04.699880   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:04.699834   14914 retry.go:31] will retry after 1.800012159s: waiting for domain to come up
	I1216 19:35:06.501712   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:06.502102   14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
	I1216 19:35:06.502129   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:06.502082   14914 retry.go:31] will retry after 2.251346298s: waiting for domain to come up
	I1216 19:35:08.755787   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:08.756226   14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
	I1216 19:35:08.756258   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:08.756200   14914 retry.go:31] will retry after 1.964356479s: waiting for domain to come up
	I1216 19:35:10.722091   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:10.722561   14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
	I1216 19:35:10.722589   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:10.722540   14914 retry.go:31] will retry after 2.999608213s: waiting for domain to come up
	I1216 19:35:13.724350   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:13.724852   14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
	I1216 19:35:13.724876   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:13.724797   14914 retry.go:31] will retry after 2.776458394s: waiting for domain to come up
	I1216 19:35:16.504723   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:16.505155   14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
	I1216 19:35:16.505173   14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:16.505131   14914 retry.go:31] will retry after 3.91215948s: waiting for domain to come up
	I1216 19:35:20.421905   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:20.422432   14891 main.go:141] libmachine: (addons-618388) found domain IP: 192.168.39.82
	I1216 19:35:20.422465   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has current primary IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:20.422473   14891 main.go:141] libmachine: (addons-618388) reserving static IP address...
	I1216 19:35:20.422882   14891 main.go:141] libmachine: (addons-618388) DBG | unable to find host DHCP lease matching {name: "addons-618388", mac: "52:54:00:3b:31:2c", ip: "192.168.39.82"} in network mk-addons-618388
	I1216 19:35:20.500737   14891 main.go:141] libmachine: (addons-618388) DBG | Getting to WaitForSSH function...
	I1216 19:35:20.500774   14891 main.go:141] libmachine: (addons-618388) reserved static IP address 192.168.39.82 for domain addons-618388
	I1216 19:35:20.500786   14891 main.go:141] libmachine: (addons-618388) waiting for SSH...
	I1216 19:35:20.503156   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:20.503484   14891 main.go:141] libmachine: (addons-618388) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388
	I1216 19:35:20.503516   14891 main.go:141] libmachine: (addons-618388) DBG | unable to find defined IP address of network mk-addons-618388 interface with MAC address 52:54:00:3b:31:2c
	I1216 19:35:20.503742   14891 main.go:141] libmachine: (addons-618388) DBG | Using SSH client type: external
	I1216 19:35:20.503763   14891 main.go:141] libmachine: (addons-618388) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa (-rw-------)
	I1216 19:35:20.503913   14891 main.go:141] libmachine: (addons-618388) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 19:35:20.503935   14891 main.go:141] libmachine: (addons-618388) DBG | About to run SSH command:
	I1216 19:35:20.503947   14891 main.go:141] libmachine: (addons-618388) DBG | exit 0
	I1216 19:35:20.515809   14891 main.go:141] libmachine: (addons-618388) DBG | SSH cmd err, output: exit status 255: 
	I1216 19:35:20.515834   14891 main.go:141] libmachine: (addons-618388) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1216 19:35:20.515841   14891 main.go:141] libmachine: (addons-618388) DBG | command : exit 0
	I1216 19:35:20.515851   14891 main.go:141] libmachine: (addons-618388) DBG | err     : exit status 255
	I1216 19:35:20.515859   14891 main.go:141] libmachine: (addons-618388) DBG | output  : 
	I1216 19:35:23.517550   14891 main.go:141] libmachine: (addons-618388) DBG | Getting to WaitForSSH function...
	I1216 19:35:23.520087   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:23.520478   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:23.520507   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:23.520686   14891 main.go:141] libmachine: (addons-618388) DBG | Using SSH client type: external
	I1216 19:35:23.520710   14891 main.go:141] libmachine: (addons-618388) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa (-rw-------)
	I1216 19:35:23.520738   14891 main.go:141] libmachine: (addons-618388) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 19:35:23.520748   14891 main.go:141] libmachine: (addons-618388) DBG | About to run SSH command:
	I1216 19:35:23.520763   14891 main.go:141] libmachine: (addons-618388) DBG | exit 0
	I1216 19:35:23.643289   14891 main.go:141] libmachine: (addons-618388) DBG | SSH cmd err, output: <nil>: 
	I1216 19:35:23.643527   14891 main.go:141] libmachine: (addons-618388) KVM machine creation complete
	I1216 19:35:23.644026   14891 main.go:141] libmachine: (addons-618388) Calling .GetConfigRaw
	I1216 19:35:23.644581   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:23.644808   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:23.644951   14891 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1216 19:35:23.644965   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:23.646368   14891 main.go:141] libmachine: Detecting operating system of created instance...
	I1216 19:35:23.646382   14891 main.go:141] libmachine: Waiting for SSH to be available...
	I1216 19:35:23.646387   14891 main.go:141] libmachine: Getting to WaitForSSH function...
	I1216 19:35:23.646392   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:23.648635   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:23.648998   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:23.649015   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:23.649156   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:23.649294   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:23.649430   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:23.649528   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:23.649748   14891 main.go:141] libmachine: Using SSH client type: native
	I1216 19:35:23.649928   14891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1216 19:35:23.649938   14891 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1216 19:35:23.754933   14891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 19:35:23.754963   14891 main.go:141] libmachine: Detecting the provisioner...
	I1216 19:35:23.754973   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:23.758121   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:23.758463   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:23.758494   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:23.758680   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:23.758975   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:23.759196   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:23.759407   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:23.759607   14891 main.go:141] libmachine: Using SSH client type: native
	I1216 19:35:23.759788   14891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1216 19:35:23.759801   14891 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1216 19:35:23.860602   14891 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1216 19:35:23.860652   14891 main.go:141] libmachine: found compatible host: buildroot
	I1216 19:35:23.860661   14891 main.go:141] libmachine: Provisioning with buildroot...
	I1216 19:35:23.860669   14891 main.go:141] libmachine: (addons-618388) Calling .GetMachineName
	I1216 19:35:23.860903   14891 buildroot.go:166] provisioning hostname "addons-618388"
	I1216 19:35:23.860928   14891 main.go:141] libmachine: (addons-618388) Calling .GetMachineName
	I1216 19:35:23.861118   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:23.863908   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:23.864296   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:23.864320   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:23.864457   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:23.864647   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:23.864834   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:23.864976   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:23.865186   14891 main.go:141] libmachine: Using SSH client type: native
	I1216 19:35:23.865399   14891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1216 19:35:23.865419   14891 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-618388 && echo "addons-618388" | sudo tee /etc/hostname
	I1216 19:35:23.984619   14891 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-618388
	
	I1216 19:35:23.984653   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:23.987150   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:23.987561   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:23.987593   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:23.987903   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:23.988093   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:23.988215   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:23.988342   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:23.988539   14891 main.go:141] libmachine: Using SSH client type: native
	I1216 19:35:23.988750   14891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1216 19:35:23.988773   14891 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-618388' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-618388/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-618388' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 19:35:24.104303   14891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 19:35:24.104333   14891 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 19:35:24.104373   14891 buildroot.go:174] setting up certificates
	I1216 19:35:24.104384   14891 provision.go:84] configureAuth start
	I1216 19:35:24.104394   14891 main.go:141] libmachine: (addons-618388) Calling .GetMachineName
	I1216 19:35:24.104666   14891 main.go:141] libmachine: (addons-618388) Calling .GetIP
	I1216 19:35:24.107137   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:24.107483   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:24.107510   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:24.107662   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:24.109717   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:24.110022   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:24.110052   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:24.110134   14891 provision.go:143] copyHostCerts
	I1216 19:35:24.110210   14891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 19:35:24.110377   14891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 19:35:24.110459   14891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 19:35:24.110524   14891 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.addons-618388 san=[127.0.0.1 192.168.39.82 addons-618388 localhost minikube]
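The server certificate generated at this step carries the VM IP, hostname and loopback names as SANs. A sketch of how one could inspect those SANs by hand on the host, using the server.pem path printed above:

	openssl x509 -in /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem -noout -text \
	  | grep -A1 'Subject Alternative Name'    # expect 127.0.0.1, 192.168.39.82, addons-618388, localhost, minikube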
	I1216 19:35:24.247178   14891 provision.go:177] copyRemoteCerts
	I1216 19:35:24.247231   14891 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 19:35:24.247265   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:24.249816   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:24.250144   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:24.250176   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:24.250346   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:24.250554   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:24.250695   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:24.250830   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:24.330115   14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 19:35:24.356239   14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 19:35:24.412775   14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 19:35:24.442720   14891 provision.go:87] duration metric: took 338.323541ms to configureAuth
	I1216 19:35:24.442750   14891 buildroot.go:189] setting minikube options for container-runtime
	I1216 19:35:24.442932   14891 config.go:182] Loaded profile config "addons-618388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 19:35:24.443023   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:24.445502   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:24.445947   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:24.445974   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:24.446221   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:24.446397   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:24.446624   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:24.446773   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:24.446965   14891 main.go:141] libmachine: Using SSH client type: native
	I1216 19:35:24.447142   14891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1216 19:35:24.447158   14891 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 19:35:24.949923   14891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 19:35:24.949948   14891 main.go:141] libmachine: Checking connection to Docker...
	I1216 19:35:24.949957   14891 main.go:141] libmachine: (addons-618388) Calling .GetURL
	I1216 19:35:24.951452   14891 main.go:141] libmachine: (addons-618388) DBG | using libvirt version 6000000
	I1216 19:35:24.953565   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:24.953916   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:24.953943   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:24.954208   14891 main.go:141] libmachine: Docker is up and running!
	I1216 19:35:24.954232   14891 main.go:141] libmachine: Reticulating splines...
	I1216 19:35:24.954240   14891 client.go:171] duration metric: took 27.998550144s to LocalClient.Create
	I1216 19:35:24.954259   14891 start.go:167] duration metric: took 27.998621198s to libmachine.API.Create "addons-618388"
	I1216 19:35:24.954271   14891 start.go:293] postStartSetup for "addons-618388" (driver="kvm2")
	I1216 19:35:24.954284   14891 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 19:35:24.954314   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:24.954549   14891 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 19:35:24.954569   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:24.956866   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:24.957175   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:24.957202   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:24.957329   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:24.957495   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:24.957640   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:24.957791   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:25.038209   14891 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 19:35:25.042942   14891 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 19:35:25.042982   14891 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 19:35:25.043061   14891 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 19:35:25.043097   14891 start.go:296] duration metric: took 88.819464ms for postStartSetup
	I1216 19:35:25.043133   14891 main.go:141] libmachine: (addons-618388) Calling .GetConfigRaw
	I1216 19:35:25.043802   14891 main.go:141] libmachine: (addons-618388) Calling .GetIP
	I1216 19:35:25.046709   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:25.047131   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:25.047162   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:25.047428   14891 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/config.json ...
	I1216 19:35:25.047665   14891 start.go:128] duration metric: took 28.110128367s to createHost
	I1216 19:35:25.047694   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:25.050107   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:25.050549   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:25.050582   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:25.050754   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:25.050953   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:25.051115   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:25.051285   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:25.051457   14891 main.go:141] libmachine: Using SSH client type: native
	I1216 19:35:25.051611   14891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.82 22 <nil> <nil>}
	I1216 19:35:25.051628   14891 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 19:35:25.152694   14891 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734377725.128871797
	
	I1216 19:35:25.152722   14891 fix.go:216] guest clock: 1734377725.128871797
	I1216 19:35:25.152734   14891 fix.go:229] Guest: 2024-12-16 19:35:25.128871797 +0000 UTC Remote: 2024-12-16 19:35:25.047680692 +0000 UTC m=+28.213613803 (delta=81.191105ms)
	I1216 19:35:25.152759   14891 fix.go:200] guest clock delta is within tolerance: 81.191105ms
	I1216 19:35:25.152765   14891 start.go:83] releasing machines lock for "addons-618388", held for 28.215298778s
	I1216 19:35:25.152790   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:25.153051   14891 main.go:141] libmachine: (addons-618388) Calling .GetIP
	I1216 19:35:25.156351   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:25.156727   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:25.156756   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:25.156971   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:25.157496   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:25.157684   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:25.157797   14891 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 19:35:25.157855   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:25.157874   14891 ssh_runner.go:195] Run: cat /version.json
	I1216 19:35:25.157889   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:25.160387   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:25.160632   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:25.160727   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:25.160770   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:25.160898   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:25.161012   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:25.161050   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:25.161079   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:25.161271   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:25.161289   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:25.161471   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:25.161468   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:25.161641   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:25.161814   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:25.264508   14891 ssh_runner.go:195] Run: systemctl --version
	I1216 19:35:25.270933   14891 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 19:35:25.435897   14891 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 19:35:25.442518   14891 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 19:35:25.442585   14891 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 19:35:25.460231   14891 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
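Pre-existing bridge/podman CNI configs are sidelined by renaming them with a .mk_disabled suffix, so they cannot conflict with the CNI that minikube configures later. A sketch of how to see what was disabled and what remains on the guest:

	ls -la /etc/cni/net.d    # disabled files carry a .mk_disabled suffix; anything left is what CRI-O will load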
	I1216 19:35:25.460257   14891 start.go:495] detecting cgroup driver to use...
	I1216 19:35:25.460316   14891 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 19:35:25.477318   14891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 19:35:25.491221   14891 docker.go:217] disabling cri-docker service (if available) ...
	I1216 19:35:25.491318   14891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 19:35:25.506165   14891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 19:35:25.520318   14891 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 19:35:25.645493   14891 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 19:35:25.792329   14891 docker.go:233] disabling docker service ...
	I1216 19:35:25.792407   14891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 19:35:25.807438   14891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 19:35:25.821221   14891 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 19:35:25.954091   14891 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 19:35:26.074391   14891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 19:35:26.088947   14891 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 19:35:26.108956   14891 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 19:35:26.109039   14891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 19:35:26.120980   14891 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 19:35:26.121101   14891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 19:35:26.132546   14891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 19:35:26.144241   14891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 19:35:26.156253   14891 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 19:35:26.168305   14891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 19:35:26.179945   14891 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 19:35:26.198487   14891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
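Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.10, switch CRI-O to the cgroupfs cgroup manager with conmon placed in the pod cgroup, and open low ports to unprivileged pods via default_sysctls. A sketch of a quick check that the drop-in picked up those values:

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly: pause_image = "registry.k8s.io/pause:3.10", cgroup_manager = "cgroupfs",
	#                    conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls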
	I1216 19:35:26.210113   14891 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 19:35:26.220621   14891 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 19:35:26.220695   14891 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 19:35:26.237391   14891 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
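The failed sysctl probe above only means br_netfilter is not loaded yet, so the module is loaded and IP forwarding is switched on. For reference, the standard bridge-netfilter prerequisites for Kubernetes pod networking amount to the following (a sketch, assuming root on the guest):

	sudo modprobe br_netfilter                            # makes /proc/sys/net/bridge/* appear
	sudo sysctl -w net.bridge.bridge-nf-call-iptables=1   # let iptables see bridged pod traffic
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'   # allow the node to forward pod packets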
	I1216 19:35:26.249281   14891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:35:26.382408   14891 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 19:35:26.484383   14891 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 19:35:26.484480   14891 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 19:35:26.490086   14891 start.go:563] Will wait 60s for crictl version
	I1216 19:35:26.490177   14891 ssh_runner.go:195] Run: which crictl
	I1216 19:35:26.494182   14891 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 19:35:26.533934   14891 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
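Because /etc/crictl.yaml written earlier pins runtime-endpoint to the CRI-O socket, plain crictl calls like the version check above need no --runtime-endpoint flag. A couple of other checks one could run on the guest (a sketch):

	sudo crictl info     # runtime status, read via /var/run/crio/crio.sock
	sudo crictl ps -a    # every container CRI-O knows about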
	I1216 19:35:26.534050   14891 ssh_runner.go:195] Run: crio --version
	I1216 19:35:26.562928   14891 ssh_runner.go:195] Run: crio --version
	I1216 19:35:26.594310   14891 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 19:35:26.595780   14891 main.go:141] libmachine: (addons-618388) Calling .GetIP
	I1216 19:35:26.598405   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:26.598711   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:26.598730   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:26.598965   14891 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 19:35:26.603547   14891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 19:35:26.616735   14891 kubeadm.go:883] updating cluster {Name:addons-618388 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.
0 ClusterName:addons-618388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 19:35:26.616834   14891 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 19:35:26.616877   14891 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 19:35:26.651197   14891 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 19:35:26.651283   14891 ssh_runner.go:195] Run: which lz4
	I1216 19:35:26.655411   14891 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 19:35:26.659872   14891 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 19:35:26.659914   14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1216 19:35:28.053331   14891 crio.go:462] duration metric: took 1.397942286s to copy over tarball
	I1216 19:35:28.053420   14891 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 19:35:30.377644   14891 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.324187999s)
	I1216 19:35:30.377674   14891 crio.go:469] duration metric: took 2.324307812s to extract the tarball
	I1216 19:35:30.377684   14891 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 19:35:30.421523   14891 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 19:35:30.475916   14891 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 19:35:30.475941   14891 cache_images.go:84] Images are preloaded, skipping loading
	I1216 19:35:30.475949   14891 kubeadm.go:934] updating node { 192.168.39.82 8443 v1.32.0 crio true true} ...
	I1216 19:35:30.476038   14891 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-618388 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:addons-618388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 19:35:30.476114   14891 ssh_runner.go:195] Run: crio config
	I1216 19:35:30.533078   14891 cni.go:84] Creating CNI manager for ""
	I1216 19:35:30.533106   14891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 19:35:30.533118   14891 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 19:35:30.533149   14891 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.82 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-618388 NodeName:addons-618388 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 19:35:30.533301   14891 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-618388"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.82"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.82"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 19:35:30.533372   14891 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 19:35:30.544648   14891 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 19:35:30.544717   14891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 19:35:30.555742   14891 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1216 19:35:30.576665   14891 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 19:35:30.595189   14891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
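The kubeadm configuration rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml before init runs. One way to sanity-check such a config by hand, which the test itself does not do, is kubeadm's dry-run mode (a sketch):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run   # validates the config and prints what would be created without touching the node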
	I1216 19:35:30.613398   14891 ssh_runner.go:195] Run: grep 192.168.39.82	control-plane.minikube.internal$ /etc/hosts
	I1216 19:35:30.617840   14891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.82	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 19:35:30.631230   14891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:35:30.778750   14891 ssh_runner.go:195] Run: sudo systemctl start kubelet
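kubelet is started here before kubeadm has produced /etc/kubernetes/kubelet.conf, so it typically keeps restarting until init completes. A sketch of checks that are handy at this stage:

	sudo systemctl is-active kubelet               # often still activating/restarting before init
	sudo journalctl -u kubelet --no-pager -n 20    # recent kubelet messages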
	I1216 19:35:30.797497   14891 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388 for IP: 192.168.39.82
	I1216 19:35:30.797522   14891 certs.go:194] generating shared ca certs ...
	I1216 19:35:30.797541   14891 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:35:30.797677   14891 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 19:35:31.087805   14891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt ...
	I1216 19:35:31.087836   14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt: {Name:mk8223f4a742e4125b8daa3a7e32f17d883b5f99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:35:31.088009   14891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key ...
	I1216 19:35:31.088019   14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key: {Name:mk35573315444553834e6f18cd2b940679ee0f07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:35:31.088091   14891 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 19:35:31.149595   14891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt ...
	I1216 19:35:31.149624   14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt: {Name:mk6a3f6f336ce262b90176d6c96cfa7c898ea7bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:35:31.149782   14891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key ...
	I1216 19:35:31.149793   14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key: {Name:mkf8d43e410cad4aa5548e27f7459158da163348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:35:31.149856   14891 certs.go:256] generating profile certs ...
	I1216 19:35:31.149924   14891 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.key
	I1216 19:35:31.149946   14891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt with IP's: []
	I1216 19:35:31.259760   14891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt ...
	I1216 19:35:31.259789   14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: {Name:mkab02c8a2b648cfe34c559214fe91fe368330f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:35:31.259942   14891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.key ...
	I1216 19:35:31.259953   14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.key: {Name:mk64939933222e9d48652e54c5f88a941ed2eb34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:35:31.260020   14891 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.key.3fc635ee
	I1216 19:35:31.260037   14891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.crt.3fc635ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.82]
	I1216 19:35:31.349697   14891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.crt.3fc635ee ...
	I1216 19:35:31.349724   14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.crt.3fc635ee: {Name:mkdd15677769fb03ff0f10d64222030963dea71c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:35:31.349861   14891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.key.3fc635ee ...
	I1216 19:35:31.349890   14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.key.3fc635ee: {Name:mke374d02b3d78d575e5dab7ea720b1d4fd93514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:35:31.349958   14891 certs.go:381] copying /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.crt.3fc635ee -> /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.crt
	I1216 19:35:31.350055   14891 certs.go:385] copying /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.key.3fc635ee -> /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.key
	I1216 19:35:31.350116   14891 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/proxy-client.key
	I1216 19:35:31.350133   14891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/proxy-client.crt with IP's: []
	I1216 19:35:31.562828   14891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/proxy-client.crt ...
	I1216 19:35:31.562862   14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/proxy-client.crt: {Name:mkde20c8645a9d5d6ee2aaa14492fa5df2fc991a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:35:31.563049   14891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/proxy-client.key ...
	I1216 19:35:31.563065   14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/proxy-client.key: {Name:mkaafb8d808060db36b0da2fab045a5e8b677276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:35:31.563289   14891 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 19:35:31.563336   14891 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 19:35:31.563371   14891 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 19:35:31.563408   14891 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 19:35:31.563963   14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 19:35:31.600224   14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 19:35:31.624564   14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 19:35:31.654903   14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 19:35:31.685331   14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 19:35:31.714782   14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 19:35:31.744799   14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 19:35:31.774469   14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 19:35:31.803877   14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 19:35:31.830743   14891 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 19:35:31.848650   14891 ssh_runner.go:195] Run: openssl version
	I1216 19:35:31.854999   14891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 19:35:31.866738   14891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 19:35:31.871822   14891 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 19:35:31.871904   14891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 19:35:31.878205   14891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
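The symlink name b5213941.0 is not arbitrary: it is the certificate's OpenSSL subject hash, as printed by the openssl x509 -hash -noout run just above, plus a .0 suffix, which is how hashed certificate directories let TLS clients locate a CA by subject. A sketch of the two pieces on the guest:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash, b5213941 in this run
	ls -l /etc/ssl/certs/b5213941.0                                           # symlink pointing back at minikubeCA.pem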
	I1216 19:35:31.889756   14891 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 19:35:31.894644   14891 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 19:35:31.894693   14891 kubeadm.go:392] StartCluster: {Name:addons-618388 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 C
lusterName:addons-618388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 19:35:31.894758   14891 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 19:35:31.894798   14891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 19:35:31.939435   14891 cri.go:89] found id: ""
	I1216 19:35:31.939512   14891 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 19:35:31.949992   14891 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 19:35:31.960189   14891 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 19:35:31.970213   14891 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 19:35:31.970234   14891 kubeadm.go:157] found existing configuration files:
	
	I1216 19:35:31.970273   14891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 19:35:31.980228   14891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 19:35:31.980320   14891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 19:35:31.990575   14891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 19:35:32.000543   14891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 19:35:32.000597   14891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 19:35:32.010508   14891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 19:35:32.019843   14891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 19:35:32.019902   14891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 19:35:32.029538   14891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 19:35:32.038950   14891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 19:35:32.039021   14891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 19:35:32.048432   14891 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 19:35:32.101787   14891 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 19:35:32.101920   14891 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 19:35:32.202928   14891 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 19:35:32.203087   14891 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 19:35:32.203266   14891 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 19:35:32.211479   14891 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 19:35:32.379326   14891 out.go:235]   - Generating certificates and keys ...
	I1216 19:35:32.379454   14891 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 19:35:32.379511   14891 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 19:35:32.454399   14891 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 19:35:32.870562   14891 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1216 19:35:32.990656   14891 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1216 19:35:33.186140   14891 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1216 19:35:33.323504   14891 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1216 19:35:33.323662   14891 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-618388 localhost] and IPs [192.168.39.82 127.0.0.1 ::1]
	I1216 19:35:33.491338   14891 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1216 19:35:33.491521   14891 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-618388 localhost] and IPs [192.168.39.82 127.0.0.1 ::1]
	I1216 19:35:33.644477   14891 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 19:35:33.846194   14891 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 19:35:34.027988   14891 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1216 19:35:34.028071   14891 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 19:35:34.146136   14891 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 19:35:34.260160   14891 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 19:35:34.421396   14891 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 19:35:34.669541   14891 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 19:35:34.863801   14891 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 19:35:34.864284   14891 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 19:35:34.866624   14891 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 19:35:34.868564   14891 out.go:235]   - Booting up control plane ...
	I1216 19:35:34.868688   14891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 19:35:34.868801   14891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 19:35:34.868900   14891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 19:35:34.884169   14891 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 19:35:34.890798   14891 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 19:35:34.890855   14891 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 19:35:35.026773   14891 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 19:35:35.027989   14891 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 19:35:35.528682   14891 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.301534ms
	I1216 19:35:35.528811   14891 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 19:35:40.530645   14891 kubeadm.go:310] [api-check] The API server is healthy after 5.002453214s
	I1216 19:35:40.542897   14891 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 19:35:40.562835   14891 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 19:35:40.597774   14891 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 19:35:40.598017   14891 kubeadm.go:310] [mark-control-plane] Marking the node addons-618388 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 19:35:40.611055   14891 kubeadm.go:310] [bootstrap-token] Using token: xt4tac.l3e3u4qwnc85x3px
	I1216 19:35:40.612916   14891 out.go:235]   - Configuring RBAC rules ...
	I1216 19:35:40.613085   14891 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 19:35:40.618695   14891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 19:35:40.625614   14891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 19:35:40.629599   14891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 19:35:40.636724   14891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 19:35:40.640252   14891 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 19:35:40.934793   14891 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 19:35:41.368529   14891 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 19:35:41.940442   14891 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 19:35:41.942708   14891 kubeadm.go:310] 
	I1216 19:35:41.942806   14891 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 19:35:41.942818   14891 kubeadm.go:310] 
	I1216 19:35:41.942942   14891 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 19:35:41.942954   14891 kubeadm.go:310] 
	I1216 19:35:41.942985   14891 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 19:35:41.944078   14891 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 19:35:41.944163   14891 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 19:35:41.944175   14891 kubeadm.go:310] 
	I1216 19:35:41.944239   14891 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 19:35:41.944249   14891 kubeadm.go:310] 
	I1216 19:35:41.944322   14891 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 19:35:41.944332   14891 kubeadm.go:310] 
	I1216 19:35:41.944413   14891 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 19:35:41.944541   14891 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 19:35:41.944657   14891 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 19:35:41.944674   14891 kubeadm.go:310] 
	I1216 19:35:41.944775   14891 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 19:35:41.944863   14891 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 19:35:41.944874   14891 kubeadm.go:310] 
	I1216 19:35:41.945002   14891 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xt4tac.l3e3u4qwnc85x3px \
	I1216 19:35:41.945157   14891 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 19:35:41.945190   14891 kubeadm.go:310] 	--control-plane 
	I1216 19:35:41.945200   14891 kubeadm.go:310] 
	I1216 19:35:41.945338   14891 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 19:35:41.945351   14891 kubeadm.go:310] 
	I1216 19:35:41.945460   14891 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xt4tac.l3e3u4qwnc85x3px \
	I1216 19:35:41.945610   14891 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 19:35:41.946217   14891 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 19:35:41.947689   14891 cni.go:84] Creating CNI manager for ""
	I1216 19:35:41.947704   14891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 19:35:41.949329   14891 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 19:35:41.950698   14891 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 19:35:41.962591   14891 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
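The 496-byte conflist copied above is not shown in the log. As a rough illustration of what a bridge-plus-portmap CNI chain of this kind typically contains (hypothetical contents; the subnet, options, and CNI version here are assumptions, not the exact file minikube generates), a small Go program writing such a file might look like:

```go
package main

import (
	"log"
	"os"
)

// Hypothetical bridge CNI conflist written to the path shown in the log line
// above. Contents are illustrative only.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```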
	I1216 19:35:41.982892   14891 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 19:35:41.983006   14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 19:35:41.983029   14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-618388 minikube.k8s.io/updated_at=2024_12_16T19_35_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=addons-618388 minikube.k8s.io/primary=true
	I1216 19:35:42.006830   14891 ops.go:34] apiserver oom_adj: -16
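The ops.go line above records the kube-apiserver's OOM score adjustment read from /proc (via the `cat /proc/$(pgrep kube-apiserver)/oom_adj` run earlier); a strongly negative value such as -16 makes the kernel's OOM killer much less likely to target the apiserver. A minimal standalone sketch of that check (the helper name and use of the current process are illustrative, not minikube's code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// readOOMAdj returns the oom_adj value for a process, mirroring the check in
// the log above. The pid is passed in directly instead of looked up via pgrep.
func readOOMAdj(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	// Check the current process as a stand-in for kube-apiserver.
	adj, err := readOOMAdj(os.Getpid())
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("oom_adj:", adj)
}
```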
	I1216 19:35:42.132903   14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 19:35:42.633212   14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 19:35:43.133717   14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 19:35:43.633740   14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 19:35:44.133125   14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 19:35:44.633784   14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 19:35:45.133784   14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 19:35:45.217213   14891 kubeadm.go:1113] duration metric: took 3.234269906s to wait for elevateKubeSystemPrivileges
	I1216 19:35:45.217247   14891 kubeadm.go:394] duration metric: took 13.322556676s to StartCluster
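The block of repeated `kubectl get sa default` runs above is minikube polling, roughly every 500ms, until the default service account exists before the cluster-admin binding for kube-system takes effect; the two duration metrics then close out that wait and the StartCluster call as a whole. A standalone sketch of that polling pattern (assumed parameters, not minikube's actual implementation):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// timeout expires, mirroring the repeated runs in the log above. The
// kubeconfig path and 500ms interval are illustrative.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	start := time.Now()
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("default service account ready after %s\n", time.Since(start))
}
```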
	I1216 19:35:45.217268   14891 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:35:45.217414   14891 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 19:35:45.217794   14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 19:35:45.217985   14891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 19:35:45.218007   14891 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 19:35:45.218124   14891 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1216 19:35:45.218249   14891 config.go:182] Loaded profile config "addons-618388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 19:35:45.218260   14891 addons.go:69] Setting yakd=true in profile "addons-618388"
	I1216 19:35:45.218267   14891 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-618388"
	I1216 19:35:45.218290   14891 addons.go:234] Setting addon yakd=true in "addons-618388"
	I1216 19:35:45.218287   14891 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-618388"
	I1216 19:35:45.218310   14891 addons.go:69] Setting metrics-server=true in profile "addons-618388"
	I1216 19:35:45.218314   14891 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-618388"
	I1216 19:35:45.218321   14891 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-618388"
	I1216 19:35:45.218325   14891 addons.go:234] Setting addon metrics-server=true in "addons-618388"
	I1216 19:35:45.218330   14891 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-618388"
	I1216 19:35:45.218326   14891 addons.go:69] Setting cloud-spanner=true in profile "addons-618388"
	I1216 19:35:45.218341   14891 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-618388"
	I1216 19:35:45.218347   14891 addons.go:234] Setting addon cloud-spanner=true in "addons-618388"
	I1216 19:35:45.218347   14891 host.go:66] Checking if "addons-618388" exists ...
	I1216 19:35:45.218358   14891 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-618388"
	I1216 19:35:45.218363   14891 host.go:66] Checking if "addons-618388" exists ...
	I1216 19:35:45.218347   14891 host.go:66] Checking if "addons-618388" exists ...
	I1216 19:35:45.218366   14891 addons.go:69] Setting default-storageclass=true in profile "addons-618388"
	I1216 19:35:45.218568   14891 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-618388"
	I1216 19:35:45.218322   14891 host.go:66] Checking if "addons-618388" exists ...
	I1216 19:35:45.218811   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.218832   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.218850   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.218866   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.218371   14891 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-618388"
	I1216 19:35:45.218374   14891 host.go:66] Checking if "addons-618388" exists ...
	I1216 19:35:45.218948   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.218986   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.218376   14891 addons.go:69] Setting gcp-auth=true in profile "addons-618388"
	I1216 19:35:45.219070   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.218953   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.218384   14891 addons.go:69] Setting ingress=true in profile "addons-618388"
	I1216 19:35:45.219097   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.219109   14891 addons.go:234] Setting addon ingress=true in "addons-618388"
	I1216 19:35:45.218384   14891 addons.go:69] Setting storage-provisioner=true in profile "addons-618388"
	I1216 19:35:45.219124   14891 addons.go:234] Setting addon storage-provisioner=true in "addons-618388"
	I1216 19:35:45.218388   14891 addons.go:69] Setting ingress-dns=true in profile "addons-618388"
	I1216 19:35:45.219165   14891 addons.go:234] Setting addon ingress-dns=true in "addons-618388"
	I1216 19:35:45.219204   14891 host.go:66] Checking if "addons-618388" exists ...
	I1216 19:35:45.219455   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.219488   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.219576   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.219070   14891 mustload.go:65] Loading cluster: addons-618388
	I1216 19:35:45.219613   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.219583   14891 host.go:66] Checking if "addons-618388" exists ...
	I1216 19:35:45.218390   14891 addons.go:69] Setting inspektor-gadget=true in profile "addons-618388"
	I1216 19:35:45.219741   14891 addons.go:234] Setting addon inspektor-gadget=true in "addons-618388"
	I1216 19:35:45.218347   14891 host.go:66] Checking if "addons-618388" exists ...
	I1216 19:35:45.219774   14891 config.go:182] Loaded profile config "addons-618388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 19:35:45.220016   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.220052   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.218395   14891 addons.go:69] Setting volcano=true in profile "addons-618388"
	I1216 19:35:45.220123   14891 addons.go:234] Setting addon volcano=true in "addons-618388"
	I1216 19:35:45.220141   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.220200   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.220246   14891 host.go:66] Checking if "addons-618388" exists ...
	I1216 19:35:45.220337   14891 host.go:66] Checking if "addons-618388" exists ...
	I1216 19:35:45.220576   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.220616   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.220155   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.220716   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.219624   14891 host.go:66] Checking if "addons-618388" exists ...
	I1216 19:35:45.220804   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.220862   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.219146   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.221270   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.221314   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.218400   14891 addons.go:69] Setting volumesnapshots=true in profile "addons-618388"
	I1216 19:35:45.223835   14891 addons.go:234] Setting addon volumesnapshots=true in "addons-618388"
	I1216 19:35:45.223877   14891 host.go:66] Checking if "addons-618388" exists ...
	I1216 19:35:45.224274   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.224307   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.227556   14891 out.go:177] * Verifying Kubernetes components...
	I1216 19:35:45.218379   14891 addons.go:69] Setting registry=true in profile "addons-618388"
	I1216 19:35:45.227892   14891 addons.go:234] Setting addon registry=true in "addons-618388"
	I1216 19:35:45.227937   14891 host.go:66] Checking if "addons-618388" exists ...
	I1216 19:35:45.228364   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.228410   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.238268   14891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 19:35:45.240677   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35731
	I1216 19:35:45.240829   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I1216 19:35:45.240997   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39977
	I1216 19:35:45.241471   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.241600   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.241742   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45505
	I1216 19:35:45.241982   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.241996   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.242055   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.242119   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.243018   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.243037   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.243162   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.243177   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.243232   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.243396   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.243427   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.243478   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.243517   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.244071   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.244104   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.259392   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42639
	I1216 19:35:45.259393   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.259407   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41145
	I1216 19:35:45.259780   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.259819   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.259903   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.259944   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.272019   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.272089   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.259779   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.272338   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.272431   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37773
	I1216 19:35:45.272433   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I1216 19:35:45.272605   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.272851   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.272984   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.273041   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.273070   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.273606   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.273621   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.273686   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.274485   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.274514   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.274640   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.274650   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.274709   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.274856   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.274916   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.274958   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.275417   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.275452   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.275987   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.276024   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.278411   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.282920   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38029
	I1216 19:35:45.283475   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.284033   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.284058   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.284445   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.284619   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.288176   14891 addons.go:234] Setting addon default-storageclass=true in "addons-618388"
	I1216 19:35:45.288218   14891 host.go:66] Checking if "addons-618388" exists ...
	I1216 19:35:45.288590   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.288626   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.288724   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34905
	I1216 19:35:45.292103   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33465
	I1216 19:35:45.292677   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.293222   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.293251   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.293658   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.293909   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.294667   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46397
	I1216 19:35:45.295147   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.295739   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.295756   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.296142   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:45.296208   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.296590   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.298369   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:45.298427   14891 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1216 19:35:45.298747   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
	I1216 19:35:45.300285   14891 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 19:35:45.300308   14891 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 19:35:45.300335   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:45.300404   14891 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1216 19:35:45.301881   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.302084   14891 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 19:35:45.302103   14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1216 19:35:45.302123   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:45.303198   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
	I1216 19:35:45.303393   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39303
	I1216 19:35:45.303649   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.312020   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.312062   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:45.312092   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.312238   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.312249   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.312839   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.312881   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.315747   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33495
	I1216 19:35:45.317481   14891 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-618388"
	I1216 19:35:45.317540   14891 host.go:66] Checking if "addons-618388" exists ...
	I1216 19:35:45.318020   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.318072   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.318370   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34299
	I1216 19:35:45.318689   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.318714   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:45.318747   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.318786   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:45.318845   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:45.318909   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.318938   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.319003   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46107
	I1216 19:35:45.319060   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.319086   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.319610   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:45.319678   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.319787   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.319811   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41847
	I1216 19:35:45.319902   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:45.320230   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.320251   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.320360   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:45.320524   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.320544   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.320604   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:45.320738   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.320763   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.320844   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.320886   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.320906   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.320928   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.320988   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.321038   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:45.321399   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.321475   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.321492   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.321560   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.321604   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:45.321906   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.322268   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.322350   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.323033   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.322931   14891 host.go:66] Checking if "addons-618388" exists ...
	I1216 19:35:45.323295   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.323688   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.323728   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.323746   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.323779   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.323788   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.323801   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.323963   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.323988   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.324095   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.324133   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.324335   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.324540   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.324860   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:45.324999   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.325048   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.325128   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.325927   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.325970   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.326771   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:45.328162   14891 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1216 19:35:45.329392   14891 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1216 19:35:45.331044   14891 out.go:177]   - Using image docker.io/registry:2.8.3
	I1216 19:35:45.331171   14891 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 19:35:45.331190   14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1216 19:35:45.331220   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:45.332051   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43309
	I1216 19:35:45.332659   14891 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1216 19:35:45.332675   14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1216 19:35:45.332695   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:45.336411   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.337791   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.338188   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:45.338210   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.338285   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39383
	I1216 19:35:45.338532   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:45.338630   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.338742   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:45.338773   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.338816   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.339029   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:45.339089   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:45.339289   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:45.339456   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.339563   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.339501   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:45.339768   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:45.339784   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.339803   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.339916   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:45.340005   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:45.340671   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.340783   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.341321   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.341385   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.341738   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.342065   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45021
	I1216 19:35:45.342798   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.343506   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.343525   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.344184   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.344446   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.346151   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:45.348242   14891 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1216 19:35:45.349631   14891 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 19:35:45.349656   14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1216 19:35:45.349678   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:45.352041   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:45.353008   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.353390   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:45.353414   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.353638   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:45.353846   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:45.353902   14891 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1216 19:35:45.354075   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:45.354239   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:45.355235   14891 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1216 19:35:45.355284   14891 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1216 19:35:45.355306   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:45.358330   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.358692   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:45.358713   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.358976   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:45.359146   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:45.359281   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:45.359383   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:45.366276   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46423
	I1216 19:35:45.366421   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45759
	I1216 19:35:45.366887   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.367558   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.367578   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.367994   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.368208   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.368887   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44253
	I1216 19:35:45.369063   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.369151   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41825
	I1216 19:35:45.369562   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44317
	I1216 19:35:45.369651   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.369662   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.370224   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.370245   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.370353   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.370369   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.370566   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.371001   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.371015   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.371039   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.371409   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.371832   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.371881   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.372111   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.372181   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.372849   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.373603   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46735
	I1216 19:35:45.374201   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.374221   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.374296   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.374368   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:45.374773   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.374919   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.374931   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.374976   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:45.375971   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.376034   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:45.376477   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:45.376516   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:45.376734   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.376991   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44167
	I1216 19:35:45.377692   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.378257   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.378282   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.378712   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.378742   14891 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1216 19:35:45.378879   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:45.378909   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.379651   14891 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 19:35:45.379905   14891 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1216 19:35:45.380018   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
	I1216 19:35:45.380661   14891 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1216 19:35:45.380678   14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1216 19:35:45.380697   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:45.381520   14891 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 19:35:45.381527   14891 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 19:35:45.381552   14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 19:35:45.381556   14891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1216 19:35:45.381568   14891 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1216 19:35:45.381572   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:45.381587   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:45.381647   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:45.382771   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.383385   14891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1216 19:35:45.383511   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
	I1216 19:35:45.384241   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.384536   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.384549   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.384809   14891 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 19:35:45.385064   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.385078   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.385933   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.386070   14891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1216 19:35:45.386226   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:45.386613   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.386695   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.386705   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.387199   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:45.387216   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.387222   14891 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1216 19:35:45.387294   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:45.387358   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:45.387375   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.387402   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:45.387413   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.387434   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:45.387479   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:45.387274   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.387736   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:45.387784   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:45.387821   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.387853   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:45.387884   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:45.387914   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:45.388052   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:45.388103   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:45.388420   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:45.388594   14891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1216 19:35:45.388737   14891 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 19:35:45.388751   14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1216 19:35:45.388767   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:45.388818   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39667
	I1216 19:35:45.388878   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:45.389117   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.389867   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.389949   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.390296   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.390424   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.391094   14891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1216 19:35:45.391354   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:45.392024   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:45.392316   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:45.392345   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:45.393288   14891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1216 19:35:45.394238   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:45.394244   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:45.394257   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:45.394268   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:45.394267   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.394294   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:45.394317   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.394274   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:45.394404   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:45.394552   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:45.394613   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:45.394624   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:45.394630   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	W1216 19:35:45.394691   14891 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1216 19:35:45.394885   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:45.394978   14891 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1216 19:35:45.395838   14891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1216 19:35:45.396690   14891 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1216 19:35:45.396711   14891 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1216 19:35:45.396728   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:45.398749   14891 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1216 19:35:45.400002   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.400064   14891 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1216 19:35:45.400452   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:45.400470   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.400609   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:45.400771   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:45.400946   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:45.401101   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:45.401251   14891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1216 19:35:45.401260   14891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1216 19:35:45.401273   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:45.402159   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38113
	I1216 19:35:45.402665   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.403399   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.403417   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.403729   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.404032   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.407360   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:45.407388   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:45.407407   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.407421   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:45.407439   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.407582   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:45.407697   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:45.407831   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:45.409274   14891 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1216 19:35:45.409607   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43099
	I1216 19:35:45.410041   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:45.410529   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:45.410542   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:45.410887   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:45.411054   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:45.412018   14891 out.go:177]   - Using image docker.io/busybox:stable
	I1216 19:35:45.412705   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:45.412897   14891 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 19:35:45.412938   14891 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 19:35:45.412969   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:45.413991   14891 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 19:35:45.414011   14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1216 19:35:45.414030   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:45.416891   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.417201   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:45.417237   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.417281   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.417351   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:45.417560   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:45.417739   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:45.417764   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:45.417745   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:45.417832   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:45.417931   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:45.417963   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:45.418074   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:45.418174   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	W1216 19:35:45.421578   14891 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43616->192.168.39.82:22: read: connection reset by peer
	I1216 19:35:45.421609   14891 retry.go:31] will retry after 278.37327ms: ssh: handshake failed: read tcp 192.168.39.1:43616->192.168.39.82:22: read: connection reset by peer
	I1216 19:35:45.741436   14891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 19:35:45.741613   14891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 19:35:45.754977   14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 19:35:45.796049   14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 19:35:45.796351   14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 19:35:45.802207   14891 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 19:35:45.802232   14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1216 19:35:45.881326   14891 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1216 19:35:45.881347   14891 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1216 19:35:45.918485   14891 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1216 19:35:45.918517   14891 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1216 19:35:45.927120   14891 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1216 19:35:45.927150   14891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1216 19:35:45.951922   14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 19:35:45.954107   14891 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1216 19:35:45.954132   14891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1216 19:35:45.964446   14891 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1216 19:35:45.964469   14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1216 19:35:45.991496   14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 19:35:46.010176   14891 node_ready.go:35] waiting up to 6m0s for node "addons-618388" to be "Ready" ...
	I1216 19:35:46.026897   14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1216 19:35:46.034081   14891 node_ready.go:49] node "addons-618388" has status "Ready":"True"
	I1216 19:35:46.034112   14891 node_ready.go:38] duration metric: took 23.902784ms for node "addons-618388" to be "Ready" ...
	I1216 19:35:46.034127   14891 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 19:35:46.062622   14891 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-jqhz4" in "kube-system" namespace to be "Ready" ...
	I1216 19:35:46.149755   14891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1216 19:35:46.149794   14891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1216 19:35:46.153498   14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 19:35:46.165027   14891 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 19:35:46.165050   14891 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 19:35:46.233019   14891 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1216 19:35:46.233044   14891 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1216 19:35:46.244692   14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1216 19:35:46.265574   14891 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1216 19:35:46.265605   14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1216 19:35:46.294298   14891 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1216 19:35:46.294324   14891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1216 19:35:46.302149   14891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1216 19:35:46.302174   14891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1216 19:35:46.369522   14891 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 19:35:46.369549   14891 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 19:35:46.400466   14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 19:35:46.418920   14891 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1216 19:35:46.418954   14891 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1216 19:35:46.516855   14891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1216 19:35:46.516883   14891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1216 19:35:46.586798   14891 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1216 19:35:46.586826   14891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1216 19:35:46.592487   14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1216 19:35:46.654821   14891 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1216 19:35:46.654844   14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1216 19:35:46.672171   14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 19:35:46.782627   14891 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1216 19:35:46.782659   14891 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1216 19:35:46.792217   14891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1216 19:35:46.792261   14891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1216 19:35:46.925990   14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1216 19:35:46.963050   14891 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 19:35:46.963081   14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1216 19:35:46.975101   14891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1216 19:35:46.975123   14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1216 19:35:47.076600   14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 19:35:47.285758   14891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1216 19:35:47.285793   14891 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1216 19:35:47.616242   14891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1216 19:35:47.616273   14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1216 19:35:47.966049   14891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1216 19:35:47.966079   14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1216 19:35:48.071985   14891 pod_ready.go:103] pod "coredns-668d6bf9bc-jqhz4" in "kube-system" namespace has status "Ready":"False"
	I1216 19:35:48.350403   14891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 19:35:48.350428   14891 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1216 19:35:48.574529   14891 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.832886184s)
	I1216 19:35:48.574563   14891 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1216 19:35:48.574579   14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.819563385s)
	I1216 19:35:48.574630   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:48.574644   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:48.574944   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:48.575029   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:48.575042   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:48.575050   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:48.575058   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:48.575290   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:48.575309   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:48.751882   14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 19:35:49.089711   14891 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-618388" context rescaled to 1 replicas
	I1216 19:35:49.496650   14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.700560289s)
	I1216 19:35:49.496706   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:49.496670   14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.700294926s)
	I1216 19:35:49.496762   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:49.496717   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:49.496837   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:49.497148   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:49.497187   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:49.497195   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:49.497193   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:49.497203   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:49.497206   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:49.497214   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:49.497220   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:49.497233   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:49.497222   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:49.497564   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:49.497622   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:49.497629   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:49.497645   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:49.497664   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:49.497675   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:50.072928   14891 pod_ready.go:103] pod "coredns-668d6bf9bc-jqhz4" in "kube-system" namespace has status "Ready":"False"
	I1216 19:35:50.597435   14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.645473122s)
	I1216 19:35:50.597485   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:50.597497   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:50.597751   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:50.597772   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:50.597781   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:50.597790   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:50.598056   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:50.598076   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:52.135522   14891 pod_ready.go:103] pod "coredns-668d6bf9bc-jqhz4" in "kube-system" namespace has status "Ready":"False"
	I1216 19:35:52.244753   14891 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1216 19:35:52.244798   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:52.247895   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:52.248316   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:52.248347   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:52.248510   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:52.248725   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:52.248900   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:52.249046   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:52.858684   14891 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1216 19:35:52.986747   14891 addons.go:234] Setting addon gcp-auth=true in "addons-618388"
	I1216 19:35:52.986811   14891 host.go:66] Checking if "addons-618388" exists ...
	I1216 19:35:52.987279   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:52.987323   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:53.003709   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33267
	I1216 19:35:53.004251   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:53.004816   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:53.004843   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:53.005171   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:53.005629   14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:35:53.005655   14891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:35:53.021457   14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40217
	I1216 19:35:53.021913   14891 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:35:53.022423   14891 main.go:141] libmachine: Using API Version  1
	I1216 19:35:53.022446   14891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:35:53.022746   14891 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:35:53.022946   14891 main.go:141] libmachine: (addons-618388) Calling .GetState
	I1216 19:35:53.024670   14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
	I1216 19:35:53.024914   14891 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1216 19:35:53.024941   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
	I1216 19:35:53.028126   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:53.028571   14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
	I1216 19:35:53.028599   14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
	I1216 19:35:53.028712   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
	I1216 19:35:53.028899   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
	I1216 19:35:53.029080   14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
	I1216 19:35:53.029270   14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
	I1216 19:35:53.633152   14891 pod_ready.go:93] pod "coredns-668d6bf9bc-jqhz4" in "kube-system" namespace has status "Ready":"True"
	I1216 19:35:53.633185   14891 pod_ready.go:82] duration metric: took 7.570536299s for pod "coredns-668d6bf9bc-jqhz4" in "kube-system" namespace to be "Ready" ...
	I1216 19:35:53.633200   14891 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-tf9ml" in "kube-system" namespace to be "Ready" ...
	I1216 19:35:54.876398   14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.884865187s)
	I1216 19:35:54.876456   14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.849525386s)
	I1216 19:35:54.876495   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.876507   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.876529   14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.723003936s)
	I1216 19:35:54.876463   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.876560   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.876566   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.876569   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.876632   14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.631910629s)
	I1216 19:35:54.876652   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.876660   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.876682   14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.476186454s)
	I1216 19:35:54.876706   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.876721   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.876756   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.876765   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.876772   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.876778   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.876778   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:54.876840   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:54.876857   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.876862   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.876870   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.876875   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.876917   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.876923   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.876930   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.876948   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.876986   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.876997   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.877006   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.877014   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.877012   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:54.877079   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.877087   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.877108   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.877117   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.877194   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:54.877218   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.877225   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.877235   14891 addons.go:475] Verifying addon ingress=true in "addons-618388"
	I1216 19:35:54.878916   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.878942   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.878962   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:54.879030   14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.28650748s)
	I1216 19:35:54.879068   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.879080   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.879126   14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.206927167s)
	I1216 19:35:54.879144   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.879154   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.879192   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:54.879208   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.879207   14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.953183187s)
	I1216 19:35:54.879217   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.879227   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:54.879230   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.879260   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.879293   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.879302   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.879309   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.879316   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.879355   14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.802721502s)
	W1216 19:35:54.879383   14891 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 19:35:54.879406   14891 retry.go:31] will retry after 198.781214ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 19:35:54.879438   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:54.879459   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.879466   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.879482   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:54.879489   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.879493   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.879495   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.879501   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.879503   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.879512   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.879546   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:54.880922   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:54.880937   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:54.880962   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.880966   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.880970   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.880973   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.881146   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:54.881177   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.881184   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.881192   14891 addons.go:475] Verifying addon metrics-server=true in "addons-618388"
	I1216 19:35:54.882338   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.882351   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.882530   14891 out.go:177] * Verifying ingress addon...
	I1216 19:35:54.882539   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.882550   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.882559   14891 addons.go:475] Verifying addon registry=true in "addons-618388"
	I1216 19:35:54.883566   14891 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-618388 service yakd-dashboard -n yakd-dashboard
	
	I1216 19:35:54.884456   14891 out.go:177] * Verifying registry addon...
	I1216 19:35:54.885480   14891 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1216 19:35:54.887072   14891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1216 19:35:54.913771   14891 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1216 19:35:54.913795   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:35:54.919312   14891 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 19:35:54.919342   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:35:54.931699   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.931725   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.931832   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:54.931852   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:54.932070   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.932128   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.932157   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:54.932173   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:54.932190   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	W1216 19:35:54.932211   14891 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1216 19:35:55.079067   14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 19:35:55.394453   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:35:55.394471   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:35:55.653170   14891 pod_ready.go:103] pod "coredns-668d6bf9bc-tf9ml" in "kube-system" namespace has status "Ready":"False"
	I1216 19:35:55.920441   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:35:55.946986   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:35:56.151678   14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.399740809s)
	I1216 19:35:56.151744   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:56.151762   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:56.151760   14891 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.126802385s)
	I1216 19:35:56.152023   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:56.152070   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:56.152078   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:56.152094   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:56.152101   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:56.152358   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:56.152376   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:56.152387   14891 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-618388"
	I1216 19:35:56.154109   14891 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 19:35:56.155166   14891 out.go:177] * Verifying csi-hostpath-driver addon...
	I1216 19:35:56.156897   14891 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1216 19:35:56.157613   14891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1216 19:35:56.158095   14891 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1216 19:35:56.158114   14891 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1216 19:35:56.221207   14891 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 19:35:56.221235   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:35:56.330703   14891 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1216 19:35:56.330726   14891 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1216 19:35:56.416259   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:35:56.416985   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:35:56.446561   14891 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 19:35:56.446591   14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1216 19:35:56.628911   14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 19:35:56.666666   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:35:56.890676   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:35:56.892748   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:35:57.161936   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:35:57.390384   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:35:57.390520   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:35:57.603624   14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.524506001s)
	I1216 19:35:57.603689   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:57.603700   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:57.603942   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:57.603965   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:57.603977   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:57.603986   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:57.603990   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:57.604218   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:57.604234   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:57.662900   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:35:57.889993   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:35:57.890314   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:35:58.170660   14891 pod_ready.go:103] pod "coredns-668d6bf9bc-tf9ml" in "kube-system" namespace has status "Ready":"False"
	I1216 19:35:58.176826   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:35:58.397562   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:35:58.418425   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:35:58.612884   14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.983928357s)
	I1216 19:35:58.612949   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:58.612968   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:58.613359   14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
	I1216 19:35:58.613362   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:58.613394   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:58.613411   14891 main.go:141] libmachine: Making call to close driver server
	I1216 19:35:58.613424   14891 main.go:141] libmachine: (addons-618388) Calling .Close
	I1216 19:35:58.613632   14891 main.go:141] libmachine: Successfully made call to close driver server
	I1216 19:35:58.613646   14891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 19:35:58.614773   14891 addons.go:475] Verifying addon gcp-auth=true in "addons-618388"
	I1216 19:35:58.616512   14891 out.go:177] * Verifying gcp-auth addon...
	I1216 19:35:58.618420   14891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1216 19:35:58.631884   14891 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1216 19:35:58.631903   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:35:58.737791   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:35:58.893170   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:35:58.894761   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:35:59.122563   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:35:59.139221   14891 pod_ready.go:98] pod "coredns-668d6bf9bc-tf9ml" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:58 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:45 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:45 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:45 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:45 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.82 HostIPs:[{IP:192.168.39.82}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-12-16 19:35:45 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-12-16 19:35:51 +0000 UTC,FinishedAt:2024-12-16 19:35:57 +0000 UTC,ContainerID:cri-o://bf92518ad21c8f1d35a45b7087078c9626af6ad30a3caacf3e0448ed04bf3ef6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://bf92518ad21c8f1d35a45b7087078c9626af6ad30a3caacf3e0448ed04bf3ef6 Started:0xc0007a1920 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00069e220} {Name:kube-api-access-84tkx MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00069e250}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1216 19:35:59.139263   14891 pod_ready.go:82] duration metric: took 5.506055119s for pod "coredns-668d6bf9bc-tf9ml" in "kube-system" namespace to be "Ready" ...
	E1216 19:35:59.139274   14891 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-tf9ml" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:58 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:45 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:45 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:45 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:45 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.82 HostIPs:[{IP:192.168.39.82}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-12-16 19:35:45 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-12-16 19:35:51 +0000 UTC,FinishedAt:2024-12-16 19:35:57 +0000 UTC,ContainerID:cri-o://bf92518ad21c8f1d35a45b7087078c9626af6ad30a3caacf3e0448ed04bf3ef6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://bf92518ad21c8f1d35a45b7087078c9626af6ad30a3caacf3e0448ed04bf3ef6 Started:0xc0007a1920 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00069e220} {Name:kube-api-access-84tkx MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00069e250}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1216 19:35:59.139284   14891 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-618388" in "kube-system" namespace to be "Ready" ...
	I1216 19:35:59.145293   14891 pod_ready.go:93] pod "etcd-addons-618388" in "kube-system" namespace has status "Ready":"True"
	I1216 19:35:59.145325   14891 pod_ready.go:82] duration metric: took 6.032862ms for pod "etcd-addons-618388" in "kube-system" namespace to be "Ready" ...
	I1216 19:35:59.145339   14891 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-618388" in "kube-system" namespace to be "Ready" ...
	I1216 19:35:59.150370   14891 pod_ready.go:93] pod "kube-apiserver-addons-618388" in "kube-system" namespace has status "Ready":"True"
	I1216 19:35:59.150393   14891 pod_ready.go:82] duration metric: took 5.045573ms for pod "kube-apiserver-addons-618388" in "kube-system" namespace to be "Ready" ...
	I1216 19:35:59.150405   14891 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-618388" in "kube-system" namespace to be "Ready" ...
	I1216 19:35:59.155518   14891 pod_ready.go:93] pod "kube-controller-manager-addons-618388" in "kube-system" namespace has status "Ready":"True"
	I1216 19:35:59.155542   14891 pod_ready.go:82] duration metric: took 5.129856ms for pod "kube-controller-manager-addons-618388" in "kube-system" namespace to be "Ready" ...
	I1216 19:35:59.155554   14891 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8t666" in "kube-system" namespace to be "Ready" ...
	I1216 19:35:59.160983   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:35:59.160995   14891 pod_ready.go:93] pod "kube-proxy-8t666" in "kube-system" namespace has status "Ready":"True"
	I1216 19:35:59.161025   14891 pod_ready.go:82] duration metric: took 5.463312ms for pod "kube-proxy-8t666" in "kube-system" namespace to be "Ready" ...
	I1216 19:35:59.161037   14891 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-618388" in "kube-system" namespace to be "Ready" ...
	I1216 19:35:59.394369   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:35:59.394719   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:35:59.537077   14891 pod_ready.go:93] pod "kube-scheduler-addons-618388" in "kube-system" namespace has status "Ready":"True"
	I1216 19:35:59.537103   14891 pod_ready.go:82] duration metric: took 376.029382ms for pod "kube-scheduler-addons-618388" in "kube-system" namespace to be "Ready" ...
	I1216 19:35:59.537110   14891 pod_ready.go:39] duration metric: took 13.502971624s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 19:35:59.537126   14891 api_server.go:52] waiting for apiserver process to appear ...
	I1216 19:35:59.537181   14891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 19:35:59.560419   14891 api_server.go:72] duration metric: took 14.342379461s to wait for apiserver process to appear ...
	I1216 19:35:59.560443   14891 api_server.go:88] waiting for apiserver healthz status ...
	I1216 19:35:59.560462   14891 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
	I1216 19:35:59.565434   14891 api_server.go:279] https://192.168.39.82:8443/healthz returned 200:
	ok
	I1216 19:35:59.567343   14891 api_server.go:141] control plane version: v1.32.0
	I1216 19:35:59.567377   14891 api_server.go:131] duration metric: took 6.927743ms to wait for apiserver health ...
	I1216 19:35:59.567384   14891 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 19:35:59.622774   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:35:59.662392   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:35:59.741680   14891 system_pods.go:59] 18 kube-system pods found
	I1216 19:35:59.741714   14891 system_pods.go:61] "amd-gpu-device-plugin-t9xls" [998af96b-a6d5-438c-8ffb-97b11028796f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 19:35:59.741721   14891 system_pods.go:61] "coredns-668d6bf9bc-jqhz4" [1d168f2c-2593-4ee9-a909-ced7e32adca5] Running
	I1216 19:35:59.741728   14891 system_pods.go:61] "csi-hostpath-attacher-0" [a6ff89b4-0d31-4e72-826a-12cf756c7e4c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 19:35:59.741734   14891 system_pods.go:61] "csi-hostpath-resizer-0" [7c08e8c6-a4d2-48d1-8641-fce068dbafa2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 19:35:59.741742   14891 system_pods.go:61] "csi-hostpathplugin-fmz2d" [c682dd96-c52d-4c59-8b61-6fb5e8f9027a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 19:35:59.741746   14891 system_pods.go:61] "etcd-addons-618388" [5e5b4607-bc43-46f6-b1e1-2c096e3f4431] Running
	I1216 19:35:59.741751   14891 system_pods.go:61] "kube-apiserver-addons-618388" [6f76d8bc-1a39-45dc-b974-21776046dccf] Running
	I1216 19:35:59.741754   14891 system_pods.go:61] "kube-controller-manager-addons-618388" [89b9f73d-e0dd-4958-b78d-eec172386bc6] Running
	I1216 19:35:59.741759   14891 system_pods.go:61] "kube-ingress-dns-minikube" [913a8e1d-d56f-4b34-89b0-afa60ef45d1a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 19:35:59.741764   14891 system_pods.go:61] "kube-proxy-8t666" [397ca8ee-6184-4c67-9cc2-df6a118f9ec7] Running
	I1216 19:35:59.741768   14891 system_pods.go:61] "kube-scheduler-addons-618388" [26b2db05-10ed-42f8-96f7-3345931f70a9] Running
	I1216 19:35:59.741774   14891 system_pods.go:61] "metrics-server-7fbb699795-c995d" [4213f921-b992-420b-bd80-e0ad67a43567] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 19:35:59.741780   14891 system_pods.go:61] "nvidia-device-plugin-daemonset-fmpb4" [e8d4bb90-d999-45bf-96e0-304cf36a3790] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 19:35:59.741786   14891 system_pods.go:61] "registry-6c86875c6f-lxvbn" [ec5514ad-5010-4fd5-bae5-fa96610b47b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 19:35:59.741793   14891 system_pods.go:61] "registry-proxy-49ln5" [29c16cb5-dd77-4e42-a748-3d4a7a80fb9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 19:35:59.741803   14891 system_pods.go:61] "snapshot-controller-68b874b76f-dzm7s" [4a9bc6bd-7ed3-4b60-9f26-33fb55f94e9e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 19:35:59.741809   14891 system_pods.go:61] "snapshot-controller-68b874b76f-qp7nw" [c8817fea-96d6-4405-8c50-674c5e47b8c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 19:35:59.741813   14891 system_pods.go:61] "storage-provisioner" [8df30b29-628b-40a9-85a1-0a2edb5357ab] Running
	I1216 19:35:59.741820   14891 system_pods.go:74] duration metric: took 174.430048ms to wait for pod list to return data ...
	I1216 19:35:59.741830   14891 default_sa.go:34] waiting for default service account to be created ...
	I1216 19:35:59.889133   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:35:59.890625   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:35:59.937013   14891 default_sa.go:45] found service account: "default"
	I1216 19:35:59.937039   14891 default_sa.go:55] duration metric: took 195.20084ms for default service account to be created ...
	I1216 19:35:59.937047   14891 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 19:36:00.122252   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:00.141373   14891 system_pods.go:86] 18 kube-system pods found
	I1216 19:36:00.141413   14891 system_pods.go:89] "amd-gpu-device-plugin-t9xls" [998af96b-a6d5-438c-8ffb-97b11028796f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 19:36:00.141422   14891 system_pods.go:89] "coredns-668d6bf9bc-jqhz4" [1d168f2c-2593-4ee9-a909-ced7e32adca5] Running
	I1216 19:36:00.141433   14891 system_pods.go:89] "csi-hostpath-attacher-0" [a6ff89b4-0d31-4e72-826a-12cf756c7e4c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 19:36:00.141442   14891 system_pods.go:89] "csi-hostpath-resizer-0" [7c08e8c6-a4d2-48d1-8641-fce068dbafa2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 19:36:00.141474   14891 system_pods.go:89] "csi-hostpathplugin-fmz2d" [c682dd96-c52d-4c59-8b61-6fb5e8f9027a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 19:36:00.141484   14891 system_pods.go:89] "etcd-addons-618388" [5e5b4607-bc43-46f6-b1e1-2c096e3f4431] Running
	I1216 19:36:00.141493   14891 system_pods.go:89] "kube-apiserver-addons-618388" [6f76d8bc-1a39-45dc-b974-21776046dccf] Running
	I1216 19:36:00.141508   14891 system_pods.go:89] "kube-controller-manager-addons-618388" [89b9f73d-e0dd-4958-b78d-eec172386bc6] Running
	I1216 19:36:00.141516   14891 system_pods.go:89] "kube-ingress-dns-minikube" [913a8e1d-d56f-4b34-89b0-afa60ef45d1a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 19:36:00.141522   14891 system_pods.go:89] "kube-proxy-8t666" [397ca8ee-6184-4c67-9cc2-df6a118f9ec7] Running
	I1216 19:36:00.141529   14891 system_pods.go:89] "kube-scheduler-addons-618388" [26b2db05-10ed-42f8-96f7-3345931f70a9] Running
	I1216 19:36:00.141539   14891 system_pods.go:89] "metrics-server-7fbb699795-c995d" [4213f921-b992-420b-bd80-e0ad67a43567] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 19:36:00.141554   14891 system_pods.go:89] "nvidia-device-plugin-daemonset-fmpb4" [e8d4bb90-d999-45bf-96e0-304cf36a3790] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 19:36:00.141567   14891 system_pods.go:89] "registry-6c86875c6f-lxvbn" [ec5514ad-5010-4fd5-bae5-fa96610b47b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 19:36:00.141576   14891 system_pods.go:89] "registry-proxy-49ln5" [29c16cb5-dd77-4e42-a748-3d4a7a80fb9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 19:36:00.141588   14891 system_pods.go:89] "snapshot-controller-68b874b76f-dzm7s" [4a9bc6bd-7ed3-4b60-9f26-33fb55f94e9e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 19:36:00.141601   14891 system_pods.go:89] "snapshot-controller-68b874b76f-qp7nw" [c8817fea-96d6-4405-8c50-674c5e47b8c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 19:36:00.141608   14891 system_pods.go:89] "storage-provisioner" [8df30b29-628b-40a9-85a1-0a2edb5357ab] Running
	I1216 19:36:00.141623   14891 system_pods.go:126] duration metric: took 204.568553ms to wait for k8s-apps to be running ...
	I1216 19:36:00.141636   14891 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 19:36:00.141689   14891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 19:36:00.162881   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:00.192172   14891 system_svc.go:56] duration metric: took 50.528142ms WaitForService to wait for kubelet
	I1216 19:36:00.192198   14891 kubeadm.go:582] duration metric: took 14.974162621s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 19:36:00.192217   14891 node_conditions.go:102] verifying NodePressure condition ...
	I1216 19:36:00.348810   14891 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 19:36:00.348842   14891 node_conditions.go:123] node cpu capacity is 2
	I1216 19:36:00.348853   14891 node_conditions.go:105] duration metric: took 156.630414ms to run NodePressure ...
	I1216 19:36:00.348865   14891 start.go:241] waiting for startup goroutines ...
	I1216 19:36:00.390768   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:00.391428   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:00.622424   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:00.662096   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:00.891761   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:00.891910   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:01.122067   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:01.161609   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:01.389753   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:01.391529   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:01.622878   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:01.661776   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:01.894410   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:01.895235   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:02.122686   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:02.162346   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:02.390099   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:02.391269   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:02.623068   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:02.661905   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:02.890109   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:02.892266   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:03.122338   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:03.162243   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:03.389955   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:03.391023   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:03.622265   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:03.724776   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:03.891464   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:03.891750   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:04.122664   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:04.164370   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:04.389479   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:04.391363   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:04.623153   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:04.662098   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:04.891520   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:04.891761   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:05.122682   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:05.162325   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:05.393158   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:05.393385   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:05.624924   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:05.661744   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:05.890412   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:05.891530   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:06.122800   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:06.163195   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:06.391796   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:06.392361   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:06.623200   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:06.663092   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:06.889696   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:06.892054   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:07.122888   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:07.163116   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:07.391691   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:07.392118   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:07.621614   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:07.662996   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:07.890959   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:07.891838   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:08.121527   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:08.162600   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:08.390260   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:08.390747   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:08.623050   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:08.661867   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:08.889805   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:08.892438   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:09.122622   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:09.163733   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:09.393027   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:09.393817   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:09.926093   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:09.926212   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:09.926481   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:09.927041   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:10.122661   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:10.162944   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:10.391125   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:10.392602   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:10.622453   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:10.671480   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:10.889497   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:10.890978   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:11.121660   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:11.162774   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:11.390506   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:11.391730   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:11.621586   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:11.662409   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:11.890761   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:11.890981   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:12.284262   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:12.285568   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:12.393567   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:12.395963   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:12.621851   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:12.662953   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:12.890845   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:12.891667   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:13.123057   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:13.162330   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:13.390772   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:13.392100   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:13.622496   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:13.662961   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:13.890239   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:13.894222   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:14.122541   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:14.163432   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:14.529204   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:14.529760   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:14.627982   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:14.662320   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:14.889278   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:14.890592   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:15.123785   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:15.164145   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:15.390904   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:15.391099   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:15.621966   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:15.661981   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:15.890605   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:15.891141   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:16.122728   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:16.163770   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:16.390780   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:16.391217   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:16.720449   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:16.720925   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:16.890888   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:16.891478   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:17.122467   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:17.162674   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:17.392130   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:17.392709   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:17.622200   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:17.662531   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:17.892075   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:17.892534   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:18.122653   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:18.162984   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:18.391307   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:18.392335   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:18.621981   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:18.662094   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:18.891374   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:18.891973   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:19.126668   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:19.164969   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:19.391719   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:19.392178   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:19.621878   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:19.664281   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:19.889761   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:19.892171   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:20.122123   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:20.223709   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:20.390868   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:20.391312   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:20.622601   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:20.663621   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:20.890977   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:20.891712   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:21.122716   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:21.162801   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:21.389760   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:21.391447   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:21.624305   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:21.662128   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:21.890354   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:21.891357   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:22.122063   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:22.162765   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:22.434216   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:22.434427   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:22.621985   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:22.662084   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:22.893960   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:22.991368   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:23.122598   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:23.162627   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:23.390580   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:23.392806   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:23.621592   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:23.662530   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:23.890194   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:23.892116   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:24.122979   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:24.163189   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:24.391328   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:24.392874   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:24.621765   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:24.663393   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:24.892827   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:24.893475   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:25.123555   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:25.167902   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:25.390847   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:25.391007   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:25.622210   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:25.661897   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:25.892554   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:25.892768   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:26.123346   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:26.161959   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:26.390428   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:26.392200   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:26.621882   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:26.662059   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:26.890443   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:26.892305   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:27.133091   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:27.164511   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:27.390661   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:27.393336   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:27.622494   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:27.662384   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:27.907581   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:27.908097   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:28.123139   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:28.163482   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:28.390241   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:28.393531   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:28.623784   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:28.662657   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:28.891476   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:28.891857   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:29.122858   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:29.163695   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:29.392193   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:29.392777   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:29.622418   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:29.664379   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:29.892472   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:29.892762   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:30.122198   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:30.162576   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:30.390267   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:30.391353   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:30.623079   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:30.663391   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:30.890199   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:30.891347   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:31.121724   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:31.161785   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:31.391698   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:31.392422   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:31.622270   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:31.662075   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:31.889698   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:31.890978   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:32.122724   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:32.162858   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:32.390276   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:32.392883   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:32.622815   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:32.664851   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:33.030774   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:33.032086   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:33.152388   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:33.167449   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:33.392006   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:33.392621   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:33.622487   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:33.662806   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:33.890173   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:33.892200   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:34.122091   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:34.162631   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:34.391267   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:34.392024   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:34.622356   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:34.663443   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:34.891897   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:34.893835   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:35.122332   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:35.166239   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:35.390869   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:35.392074   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:35.621828   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:35.664597   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:35.892337   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:35.897988   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:36.122592   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:36.164635   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:36.389381   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:36.390997   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:36.621947   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:36.663598   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:36.891558   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:36.895307   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:37.122981   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:37.162049   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:37.390566   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:37.391933   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:37.621701   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:37.662812   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:37.890271   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:37.890279   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:38.122818   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:38.162767   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:38.391174   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:38.391174   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:38.622733   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:38.663633   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:38.891438   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:38.891566   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:39.122640   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:39.162784   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:39.478504   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:39.478622   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:39.624178   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:39.724855   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:39.890392   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:39.891524   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:40.121732   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:40.166528   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:40.401465   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:40.401469   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:40.623088   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:40.662363   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:40.891217   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:40.892110   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:41.121697   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:41.162860   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:41.390546   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:41.393038   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:41.622012   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:41.662205   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:41.892282   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:41.894263   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:42.123360   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:42.162295   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:42.391233   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:42.391993   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:42.623542   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:42.724906   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:42.891286   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:42.891618   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 19:36:43.122964   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:43.162484   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:43.394467   14891 kapi.go:107] duration metric: took 48.507351034s to wait for kubernetes.io/minikube-addons=registry ...
	I1216 19:36:43.394578   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:43.623016   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:43.662226   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:43.889564   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:44.122471   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:44.162662   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:44.390929   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:44.622649   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:44.662440   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:44.889332   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:45.121963   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:45.161893   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:45.389632   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:45.622790   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:45.662854   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:45.890229   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:46.122528   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:46.162752   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:46.390422   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:46.622557   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:46.663097   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:46.889957   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:47.122083   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:47.162387   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:47.390396   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:47.624476   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:47.662328   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:47.891612   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:48.122610   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:48.162972   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:48.391286   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:48.623090   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:48.662175   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:48.889361   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:49.122787   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:49.162754   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:49.390059   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:49.622530   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:49.794685   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:49.894843   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:50.123037   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:50.224978   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:50.391336   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:50.623309   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:50.662379   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:50.890426   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:51.121987   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:51.162146   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:51.402889   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:51.622818   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:51.665739   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:51.890496   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:52.123294   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:52.225394   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:52.389577   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:52.622070   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:52.662794   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:52.890189   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:53.121703   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:53.163482   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:53.390306   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:53.623958   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:53.662395   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:53.890450   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:54.129972   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:54.237375   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:54.389828   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:54.622885   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:54.662666   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:54.890276   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:55.124818   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:55.163228   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:55.390841   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:55.622328   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:55.665311   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:55.890586   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:56.123512   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:56.162132   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:56.390183   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:56.622617   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:56.663979   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:56.890154   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:57.128701   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:57.163574   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:57.390136   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:57.623092   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:57.662086   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:57.891132   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:58.122077   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:58.162083   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:58.390403   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:58.622513   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:58.662879   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:58.890759   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:59.122178   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:59.164506   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:59.398537   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:36:59.626687   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:36:59.740877   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:36:59.894055   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:37:00.124268   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:00.226057   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:00.392111   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:37:00.621793   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:00.662869   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:00.890644   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:37:01.123542   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:01.163151   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:01.391049   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:37:01.621631   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:01.663074   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:01.891007   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:37:02.122547   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:02.163307   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:02.389687   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:37:02.623756   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:02.663319   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:02.891647   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:37:03.122443   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:03.162700   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:03.390031   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:37:03.874029   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:03.875337   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:03.890107   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:37:04.122016   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:04.162425   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:04.390353   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:37:04.621504   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:04.662458   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:04.890460   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:37:05.121997   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:05.162890   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:05.390593   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:37:05.622582   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:05.662850   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:05.890118   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:37:06.122736   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:06.162893   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:06.391529   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:37:06.622355   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:06.662403   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:06.891677   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:37:07.123171   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:07.162220   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:07.389800   14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 19:37:07.630395   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:07.664404   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:07.890827   14891 kapi.go:107] duration metric: took 1m13.005342009s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1216 19:37:08.123366   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:08.163060   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:08.627418   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:08.729376   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:09.122663   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:09.162691   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:09.622068   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:09.663092   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:10.123027   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:10.224731   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:10.623229   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 19:37:10.671272   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:11.122165   14891 kapi.go:107] duration metric: took 1m12.503744021s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1216 19:37:11.124404   14891 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-618388 cluster.
	I1216 19:37:11.125969   14891 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1216 19:37:11.127457   14891 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1216 19:37:11.162118   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:11.662394   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:12.162104   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:12.670233   14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 19:37:13.162388   14891 kapi.go:107] duration metric: took 1m17.004772258s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1216 19:37:13.164371   14891 out.go:177] * Enabled addons: amd-gpu-device-plugin, ingress-dns, nvidia-device-plugin, storage-provisioner, cloud-spanner, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1216 19:37:13.165865   14891 addons.go:510] duration metric: took 1m27.947743244s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns nvidia-device-plugin storage-provisioner cloud-spanner metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1216 19:37:13.165905   14891 start.go:246] waiting for cluster config update ...
	I1216 19:37:13.165923   14891 start.go:255] writing updated cluster config ...
	I1216 19:37:13.166194   14891 ssh_runner.go:195] Run: rm -f paused
	I1216 19:37:13.218386   14891 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 19:37:13.220431   14891 out.go:177] * Done! kubectl is now configured to use "addons-618388" cluster and "default" namespace by default
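	Note on the gcp-auth messages above: the addon's output only names the `gcp-auth-skip-secret` label key for opting a pod out of credential mounting. A minimal sketch of such a pod manifest, assuming the conventional "true" value and illustrative pod/image names (neither appears in this report):

	    # Hypothetical manifest; only the gcp-auth-skip-secret key comes from the log above.
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds-demo          # illustrative name, not from the report
	      labels:
	        gcp-auth-skip-secret: "true"   # asks the gcp-auth webhook to skip mounting credentials
	    spec:
	      containers:
	        - name: web
	          image: docker.io/library/nginx:alpine

	For pods created before the addon was enabled, the log suggests recreating them or rerunning the addon with --refresh, e.g. `minikube -p addons-618388 addons enable gcp-auth --refresh`.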
	
	
	==> CRI-O <==
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.695934483Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6a45c62-9537-4be8-97d7-cf3536788fcb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.698129818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734378011698097956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6a45c62-9537-4be8-97d7-cf3536788fcb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.698966302Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60b5d690-be24-4b3f-ab44-c4bfe7d89ebd name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.699026911Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60b5d690-be24-4b3f-ab44-c4bfe7d89ebd name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.699347293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a83ce5fc3bf5d5eed034bb5e58b580fb6e83c8c250e63aa9e018497aec331259,PodSandboxId:012cfbaf366c7682320ff9b20e114008fbc8a2f619c0f2fb59d052d9c3dbab82,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734377873939416092,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 004073d4-980e-4fd9-ad94-dc4598f84218,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373b7d1d5c5a8decd5d0fede1509e853236e98f53519a70b5d20098e800239f5,PodSandboxId:b9d05acaaff9aa586fc1e7693ef04f13cb1b3c7d5b94c8d90ef5ca226eec4d83,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734377835495900239,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12fe933d-2f3a-4b23-9e9d-2faa73db353b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b941a1e632cbc8f4a8a5493e67bf24cccf3be8fccf534e7fd10e567f414c58,PodSandboxId:3b7647453926fa4968304d00504647f8d88639d93cd6984c9cb917815d2d59a6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1734377826834387420,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-rtb85,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 918046a9-d03b-4717-8083-f1055bb8fa1e,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0389c8ef56a302abbd165e6d8c2aba1a54ed92bebb9e68df36c196e48f70b39a,PodSandboxId:ae114d6dcd6a359f99ccef2a0284d2896b6901240f689bb95babf1ae940d0ae9,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1734377812564024360,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lsm5p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 29c0b712-49ac-4316-b9d3-f602609b2309,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd2e36bd4d4598dd05712cdd0088e0da4a6a77814baac4e4b508a8fb57c5f9c4,PodSandboxId:784dd69c8e3614ccd13d710bd822fd1bc48e42ecaa5d7e4d0d5e1dfab67dd2f5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1734377812020186227,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sgp7s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 927e60ad-2b4f-4cb6-9f3d-fdc73a5b0b8d,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac4f297a677bfe4a8426be0287541519521ff1fa4c953c559bb2a61cffe7c51,PodSandboxId:5347fbb80356ab6d733eb639850a88c731f4baa8dc7b8d431ca720fbd038b347,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1734377804295686677,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-f8t2h,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 01196ec1-9fa0-47a9-813e-64cd0afae7da,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c876f3d6402bd3090160248e2e7a745c32eb7f2c70463e5b2183aee03dc9e785,PodSandboxId:bb9c50c2e335b21fdaaae95166fe59e183555481160e0c1bcd0edf2859d4be8c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734377778678659314,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t9xls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 998af96b-a6d5-438c-8ffb-97b11028796f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273479a02f4bcce8b7f05f4909cde609eea1d516ef32336b7540d277526e2f1f,PodSandboxId:d7150f67485f635bcb7abfa2265b4812cbfe6f2ee89f64c3ce47c57931e0492e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d46
0978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1734377763574667739,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 913a8e1d-d56f-4b34-89b0-afa60ef45d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78577479cd15e2342c6dd4796ceb214c0d74c7572c9ae96444bdb9b7e2cba77d,PodSandboxId:c454f29e8eda3f10e01cde053db6649aaf970515a4153f172acb6773cbc41242,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734377751968521223,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8df30b29-628b-40a9-85a1-0a2edb5357ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74ac4a483ca6e2d67d6a49be154343cc60031a6dd0b78bc7e90fd9f07bfe3db1,PodSandboxId:7fc5e2399f4850db7633adb765e5e9b8ae49a564044e08e1b9afd42ede84e911,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734377750706940648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jqhz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d168f2c-2593-4ee9-a909-ced7e32adca5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:914213cd5da43b2e02f7ae5ef1ad795000630d45fff565905982bef7c39857ba,PodSandboxId:44cbb4acdf41d8e154dff11cf0fd9ae2796c4d94720b2bf81fb095cbb19a7b6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1734377747175738840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397ca8ee-6184-4c67-9cc2-df6a118f9ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6184416c7f2245b19
769b6b6c18aff4e9bfdab07bd818fd77cc33aff5bfa0eab,PodSandboxId:9b376e0f6943772f59f81c70e4b5efdf5563397deaff1bddee775af1780d9ef3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734377735981567898,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c652e97277b9a4e1265beba344d8e0db,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15085178c7262da01fec9432c0fc231cb4
3bf620ae6c2ccefc3eb2a726807c4a,PodSandboxId:6f85776c42f4225819f1c1cdc8ac0e8f3a93daca7328eb2f9deec4972480fc0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734377735975328343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4813f902c974c98326634283e67497,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a57ad54df62563b286ad667
2f38fdde8d7b769e145a520d7f2b05cedfb36e53,PodSandboxId:2a85ba66f0bb266d60865dc63bd2d06a4a3d0527a7f4709965b125b0297a51e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734377735966808524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfdf638021c8a1520d724d230dfdd84,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e31ff64b1d64f3f9a8478142fdaf8ce5bd23879d6318582530651cb287bcd456,PodSan
dboxId:c239d7fd5353d698f1c91c0cd335d3511ef509bed97a7402812a366f79448486,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734377735959166165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7ed9c41ae63edd6868abd3c5d53735,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60b5d690-be24-4b3f-ab44-c4bfe7d89ebd name=/runtime.v1
.RuntimeService/ListContainers
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.736565916Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7aff5b67-95ce-47ef-9b5d-79489a2bb912 name=/runtime.v1.RuntimeService/Version
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.736661960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7aff5b67-95ce-47ef-9b5d-79489a2bb912 name=/runtime.v1.RuntimeService/Version
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.737869833Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=127e0b07-5f4b-4255-80f9-9f6240521ea0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.739554265Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734378011739519843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=127e0b07-5f4b-4255-80f9-9f6240521ea0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.740287400Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcfefa24-57c5-4965-9241-6383fa013f00 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.740408857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcfefa24-57c5-4965-9241-6383fa013f00 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.740899325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a83ce5fc3bf5d5eed034bb5e58b580fb6e83c8c250e63aa9e018497aec331259,PodSandboxId:012cfbaf366c7682320ff9b20e114008fbc8a2f619c0f2fb59d052d9c3dbab82,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734377873939416092,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 004073d4-980e-4fd9-ad94-dc4598f84218,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373b7d1d5c5a8decd5d0fede1509e853236e98f53519a70b5d20098e800239f5,PodSandboxId:b9d05acaaff9aa586fc1e7693ef04f13cb1b3c7d5b94c8d90ef5ca226eec4d83,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734377835495900239,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12fe933d-2f3a-4b23-9e9d-2faa73db353b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b941a1e632cbc8f4a8a5493e67bf24cccf3be8fccf534e7fd10e567f414c58,PodSandboxId:3b7647453926fa4968304d00504647f8d88639d93cd6984c9cb917815d2d59a6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1734377826834387420,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-rtb85,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 918046a9-d03b-4717-8083-f1055bb8fa1e,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0389c8ef56a302abbd165e6d8c2aba1a54ed92bebb9e68df36c196e48f70b39a,PodSandboxId:ae114d6dcd6a359f99ccef2a0284d2896b6901240f689bb95babf1ae940d0ae9,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1734377812564024360,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lsm5p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 29c0b712-49ac-4316-b9d3-f602609b2309,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd2e36bd4d4598dd05712cdd0088e0da4a6a77814baac4e4b508a8fb57c5f9c4,PodSandboxId:784dd69c8e3614ccd13d710bd822fd1bc48e42ecaa5d7e4d0d5e1dfab67dd2f5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1734377812020186227,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sgp7s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 927e60ad-2b4f-4cb6-9f3d-fdc73a5b0b8d,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac4f297a677bfe4a8426be0287541519521ff1fa4c953c559bb2a61cffe7c51,PodSandboxId:5347fbb80356ab6d733eb639850a88c731f4baa8dc7b8d431ca720fbd038b347,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1734377804295686677,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-f8t2h,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 01196ec1-9fa0-47a9-813e-64cd0afae7da,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c876f3d6402bd3090160248e2e7a745c32eb7f2c70463e5b2183aee03dc9e785,PodSandboxId:bb9c50c2e335b21fdaaae95166fe59e183555481160e0c1bcd0edf2859d4be8c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734377778678659314,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t9xls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 998af96b-a6d5-438c-8ffb-97b11028796f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273479a02f4bcce8b7f05f4909cde609eea1d516ef32336b7540d277526e2f1f,PodSandboxId:d7150f67485f635bcb7abfa2265b4812cbfe6f2ee89f64c3ce47c57931e0492e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d46
0978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1734377763574667739,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 913a8e1d-d56f-4b34-89b0-afa60ef45d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78577479cd15e2342c6dd4796ceb214c0d74c7572c9ae96444bdb9b7e2cba77d,PodSandboxId:c454f29e8eda3f10e01cde053db6649aaf970515a4153f172acb6773cbc41242,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734377751968521223,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8df30b29-628b-40a9-85a1-0a2edb5357ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74ac4a483ca6e2d67d6a49be154343cc60031a6dd0b78bc7e90fd9f07bfe3db1,PodSandboxId:7fc5e2399f4850db7633adb765e5e9b8ae49a564044e08e1b9afd42ede84e911,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734377750706940648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jqhz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d168f2c-2593-4ee9-a909-ced7e32adca5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:914213cd5da43b2e02f7ae5ef1ad795000630d45fff565905982bef7c39857ba,PodSandboxId:44cbb4acdf41d8e154dff11cf0fd9ae2796c4d94720b2bf81fb095cbb19a7b6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1734377747175738840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397ca8ee-6184-4c67-9cc2-df6a118f9ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6184416c7f2245b19
769b6b6c18aff4e9bfdab07bd818fd77cc33aff5bfa0eab,PodSandboxId:9b376e0f6943772f59f81c70e4b5efdf5563397deaff1bddee775af1780d9ef3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734377735981567898,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c652e97277b9a4e1265beba344d8e0db,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15085178c7262da01fec9432c0fc231cb4
3bf620ae6c2ccefc3eb2a726807c4a,PodSandboxId:6f85776c42f4225819f1c1cdc8ac0e8f3a93daca7328eb2f9deec4972480fc0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734377735975328343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4813f902c974c98326634283e67497,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a57ad54df62563b286ad667
2f38fdde8d7b769e145a520d7f2b05cedfb36e53,PodSandboxId:2a85ba66f0bb266d60865dc63bd2d06a4a3d0527a7f4709965b125b0297a51e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734377735966808524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfdf638021c8a1520d724d230dfdd84,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e31ff64b1d64f3f9a8478142fdaf8ce5bd23879d6318582530651cb287bcd456,PodSan
dboxId:c239d7fd5353d698f1c91c0cd335d3511ef509bed97a7402812a366f79448486,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734377735959166165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7ed9c41ae63edd6868abd3c5d53735,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dcfefa24-57c5-4965-9241-6383fa013f00 name=/runtime.v1
.RuntimeService/ListContainers
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.758316468Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=3c1eec1b-0710-4ac4-974b-d17cdac220c5 name=/runtime.v1.RuntimeService/Status
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.758404678Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=3c1eec1b-0710-4ac4-974b-d17cdac220c5 name=/runtime.v1.RuntimeService/Status
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.759000145Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.760392222Z" level=debug msg="Using SQLite blob info cache at /var/lib/containers/cache/blob-info-cache-v1.sqlite" file="blobinfocache/default.go:74"
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.760834428Z" level=debug msg="Source is a manifest list; copying (only) instance sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 for current system" file="copy/copy.go:318"
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.760911949Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.786909082Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b0d1e0f-ef4f-4e97-80e7-8cd5c48da057 name=/runtime.v1.RuntimeService/Version
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.786999112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b0d1e0f-ef4f-4e97-80e7-8cd5c48da057 name=/runtime.v1.RuntimeService/Version
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.788639159Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=605e7d39-a8ae-4490-aaea-e8976b115ccc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.790349792Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734378011790318087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=605e7d39-a8ae-4490-aaea-e8976b115ccc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.790920882Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cf1d189-90e1-46b1-882f-646910919aef name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.790975806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cf1d189-90e1-46b1-882f-646910919aef name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.791310154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a83ce5fc3bf5d5eed034bb5e58b580fb6e83c8c250e63aa9e018497aec331259,PodSandboxId:012cfbaf366c7682320ff9b20e114008fbc8a2f619c0f2fb59d052d9c3dbab82,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734377873939416092,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 004073d4-980e-4fd9-ad94-dc4598f84218,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373b7d1d5c5a8decd5d0fede1509e853236e98f53519a70b5d20098e800239f5,PodSandboxId:b9d05acaaff9aa586fc1e7693ef04f13cb1b3c7d5b94c8d90ef5ca226eec4d83,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734377835495900239,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12fe933d-2f3a-4b23-9e9d-2faa73db353b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b941a1e632cbc8f4a8a5493e67bf24cccf3be8fccf534e7fd10e567f414c58,PodSandboxId:3b7647453926fa4968304d00504647f8d88639d93cd6984c9cb917815d2d59a6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1734377826834387420,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-rtb85,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 918046a9-d03b-4717-8083-f1055bb8fa1e,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0389c8ef56a302abbd165e6d8c2aba1a54ed92bebb9e68df36c196e48f70b39a,PodSandboxId:ae114d6dcd6a359f99ccef2a0284d2896b6901240f689bb95babf1ae940d0ae9,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1734377812564024360,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lsm5p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 29c0b712-49ac-4316-b9d3-f602609b2309,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd2e36bd4d4598dd05712cdd0088e0da4a6a77814baac4e4b508a8fb57c5f9c4,PodSandboxId:784dd69c8e3614ccd13d710bd822fd1bc48e42ecaa5d7e4d0d5e1dfab67dd2f5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1734377812020186227,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sgp7s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 927e60ad-2b4f-4cb6-9f3d-fdc73a5b0b8d,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac4f297a677bfe4a8426be0287541519521ff1fa4c953c559bb2a61cffe7c51,PodSandboxId:5347fbb80356ab6d733eb639850a88c731f4baa8dc7b8d431ca720fbd038b347,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1734377804295686677,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-f8t2h,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 01196ec1-9fa0-47a9-813e-64cd0afae7da,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c876f3d6402bd3090160248e2e7a745c32eb7f2c70463e5b2183aee03dc9e785,PodSandboxId:bb9c50c2e335b21fdaaae95166fe59e183555481160e0c1bcd0edf2859d4be8c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734377778678659314,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t9xls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 998af96b-a6d5-438c-8ffb-97b11028796f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273479a02f4bcce8b7f05f4909cde609eea1d516ef32336b7540d277526e2f1f,PodSandboxId:d7150f67485f635bcb7abfa2265b4812cbfe6f2ee89f64c3ce47c57931e0492e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d46
0978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1734377763574667739,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 913a8e1d-d56f-4b34-89b0-afa60ef45d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78577479cd15e2342c6dd4796ceb214c0d74c7572c9ae96444bdb9b7e2cba77d,PodSandboxId:c454f29e8eda3f10e01cde053db6649aaf970515a4153f172acb6773cbc41242,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734377751968521223,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8df30b29-628b-40a9-85a1-0a2edb5357ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74ac4a483ca6e2d67d6a49be154343cc60031a6dd0b78bc7e90fd9f07bfe3db1,PodSandboxId:7fc5e2399f4850db7633adb765e5e9b8ae49a564044e08e1b9afd42ede84e911,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734377750706940648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jqhz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d168f2c-2593-4ee9-a909-ced7e32adca5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:914213cd5da43b2e02f7ae5ef1ad795000630d45fff565905982bef7c39857ba,PodSandboxId:44cbb4acdf41d8e154dff11cf0fd9ae2796c4d94720b2bf81fb095cbb19a7b6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1734377747175738840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397ca8ee-6184-4c67-9cc2-df6a118f9ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6184416c7f2245b19
769b6b6c18aff4e9bfdab07bd818fd77cc33aff5bfa0eab,PodSandboxId:9b376e0f6943772f59f81c70e4b5efdf5563397deaff1bddee775af1780d9ef3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734377735981567898,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c652e97277b9a4e1265beba344d8e0db,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15085178c7262da01fec9432c0fc231cb4
3bf620ae6c2ccefc3eb2a726807c4a,PodSandboxId:6f85776c42f4225819f1c1cdc8ac0e8f3a93daca7328eb2f9deec4972480fc0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734377735975328343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4813f902c974c98326634283e67497,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a57ad54df62563b286ad667
2f38fdde8d7b769e145a520d7f2b05cedfb36e53,PodSandboxId:2a85ba66f0bb266d60865dc63bd2d06a4a3d0527a7f4709965b125b0297a51e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734377735966808524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfdf638021c8a1520d724d230dfdd84,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e31ff64b1d64f3f9a8478142fdaf8ce5bd23879d6318582530651cb287bcd456,PodSan
dboxId:c239d7fd5353d698f1c91c0cd335d3511ef509bed97a7402812a366f79448486,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734377735959166165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7ed9c41ae63edd6868abd3c5d53735,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cf1d189-90e1-46b1-882f-646910919aef name=/runtime.v1
.RuntimeService/ListContainers
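The crio debug entries above record the CRI RPCs (Version, ImageFsInfo, ListContainers) that the kubelet and the log collector issue against the runtime. As a minimal sketch, assuming crio runs as the usual "crio" systemd unit inside the minikube guest (an assumption, not verified from this report), the same stream can be tailed over the driver's ssh helper:

  # assumption: the service is named "crio" and journalctl is available in the guest
  out/minikube-linux-amd64 -p addons-618388 ssh "sudo journalctl -u crio -n 200 --no-pager"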
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a83ce5fc3bf5d       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago       Running             nginx                     0                   012cfbaf366c7       nginx
	373b7d1d5c5a8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   b9d05acaaff9a       busybox
	15b941a1e632c       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   3b7647453926f       ingress-nginx-controller-56d7c84fd4-rtb85
	0389c8ef56a30       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     1                   ae114d6dcd6a3       ingress-nginx-admission-patch-lsm5p
	dd2e36bd4d459       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   784dd69c8e361       ingress-nginx-admission-create-sgp7s
	eac4f297a677b       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   5347fbb80356a       local-path-provisioner-76f89f99b5-f8t2h
	c876f3d6402bd       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     3 minutes ago       Running             amd-gpu-device-plugin     0                   bb9c50c2e335b       amd-gpu-device-plugin-t9xls
	273479a02f4bc       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   d7150f67485f6       kube-ingress-dns-minikube
	78577479cd15e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   c454f29e8eda3       storage-provisioner
	74ac4a483ca6e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   7fc5e2399f485       coredns-668d6bf9bc-jqhz4
	914213cd5da43       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08                                                             4 minutes ago       Running             kube-proxy                0                   44cbb4acdf41d       kube-proxy-8t666
	6184416c7f224       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5                                                             4 minutes ago       Running             kube-scheduler            0                   9b376e0f69437       kube-scheduler-addons-618388
	15085178c7262       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3                                                             4 minutes ago       Running             kube-controller-manager   0                   6f85776c42f42       kube-controller-manager-addons-618388
	3a57ad54df625       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   2a85ba66f0bb2       etcd-addons-618388
	e31ff64b1d64f       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4                                                             4 minutes ago       Running             kube-apiserver            0                   c239d7fd5353d       kube-apiserver-addons-618388
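The container status table is the runtime's own view of every pod container on the node, including the exited admission-webhook jobs. A minimal way to reproduce it, assuming crictl ships in the minikube guest image and talks to the crio socket by default (assumptions, not confirmed by this log):

  # assumption: crictl is installed in the guest and points at the crio socket
  out/minikube-linux-amd64 -p addons-618388 ssh "sudo crictl ps -a"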
	
	
	==> coredns [74ac4a483ca6e2d67d6a49be154343cc60031a6dd0b78bc7e90fd9f07bfe3db1] <==
	[INFO] 10.244.0.7:54220 - 2731 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000299477s
	[INFO] 10.244.0.7:54220 - 20337 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000130969s
	[INFO] 10.244.0.7:54220 - 4360 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000067619s
	[INFO] 10.244.0.7:54220 - 61535 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000138201s
	[INFO] 10.244.0.7:54220 - 8374 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000300711s
	[INFO] 10.244.0.7:54220 - 54924 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000413344s
	[INFO] 10.244.0.7:54220 - 42136 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000087175s
	[INFO] 10.244.0.7:44681 - 23139 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149048s
	[INFO] 10.244.0.7:44681 - 22873 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000410459s
	[INFO] 10.244.0.7:38297 - 42266 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000109641s
	[INFO] 10.244.0.7:38297 - 41994 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00022381s
	[INFO] 10.244.0.7:50387 - 21665 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009814s
	[INFO] 10.244.0.7:50387 - 21422 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000213167s
	[INFO] 10.244.0.7:38373 - 52908 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000079805s
	[INFO] 10.244.0.7:38373 - 53086 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000217665s
	[INFO] 10.244.0.23:39422 - 915 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00068431s
	[INFO] 10.244.0.23:60480 - 39142 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000154041s
	[INFO] 10.244.0.23:42727 - 48733 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000180931s
	[INFO] 10.244.0.23:56814 - 26277 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100708s
	[INFO] 10.244.0.23:47272 - 58386 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119271s
	[INFO] 10.244.0.23:52013 - 18859 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000068854s
	[INFO] 10.244.0.23:44785 - 22856 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003028236s
	[INFO] 10.244.0.23:38917 - 36499 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.004353333s
	[INFO] 10.244.0.27:49189 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000686921s
	[INFO] 10.244.0.27:56941 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141104s
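The coredns excerpt shows the expected NXDOMAIN walk through the search domains followed by NOERROR answers for registry.kube-system and storage.googleapis.com, which suggests in-cluster DNS was resolving normally at capture time. The same log can be pulled without ssh, using the pod name from the container status table above:

  kubectl --context addons-618388 -n kube-system logs coredns-668d6bf9bc-jqhz4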
	
	
	==> describe nodes <==
	Name:               addons-618388
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-618388
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477
	                    minikube.k8s.io/name=addons-618388
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T19_35_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-618388
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 19:35:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-618388
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 19:40:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 19:38:23 +0000   Mon, 16 Dec 2024 19:35:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 19:38:23 +0000   Mon, 16 Dec 2024 19:35:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 19:38:23 +0000   Mon, 16 Dec 2024 19:35:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 19:38:23 +0000   Mon, 16 Dec 2024 19:35:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.82
	  Hostname:    addons-618388
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 587b661faea140c3b5b4e0025416a25f
	  System UUID:                587b661f-aea1-40c3-b5b4-e0025416a25f
	  Boot ID:                    5a26730d-8cc2-4b49-afb3-fcb48f5f35dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     hello-world-app-7d9564db4-pbr29              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-rtb85    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m18s
	  kube-system                 amd-gpu-device-plugin-t9xls                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 coredns-668d6bf9bc-jqhz4                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m27s
	  kube-system                 etcd-addons-618388                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m33s
	  kube-system                 kube-apiserver-addons-618388                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 kube-controller-manager-addons-618388        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-proxy-8t666                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-scheduler-addons-618388                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  local-path-storage          local-path-provisioner-76f89f99b5-f8t2h      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m37s (x8 over 4m37s)  kubelet          Node addons-618388 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m37s (x8 over 4m37s)  kubelet          Node addons-618388 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m37s (x7 over 4m37s)  kubelet          Node addons-618388 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m31s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m31s                  kubelet          Node addons-618388 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m31s                  kubelet          Node addons-618388 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m31s                  kubelet          Node addons-618388 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m30s                  kubelet          Node addons-618388 status is now: NodeReady
	  Normal  RegisteredNode           4m28s                  node-controller  Node addons-618388 event: Registered Node addons-618388 in Controller
	
	
	==> dmesg <==
	[  +5.180632] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[  +0.080147] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.277520] systemd-fstab-generator[1347]: Ignoring "noauto" option for root device
	[  +1.152523] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.065925] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.014777] kauditd_printk_skb: 83 callbacks suppressed
	[Dec16 19:36] kauditd_printk_skb: 117 callbacks suppressed
	[ +25.659825] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.159570] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.745566] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.281036] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.306466] kauditd_printk_skb: 56 callbacks suppressed
	[Dec16 19:37] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.698926] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.629396] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.536843] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.138200] kauditd_printk_skb: 4 callbacks suppressed
	[ +11.879574] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.223527] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.100936] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.282447] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 19:38] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.293016] kauditd_printk_skb: 20 callbacks suppressed
	[  +8.353598] kauditd_printk_skb: 40 callbacks suppressed
	[Dec16 19:40] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [3a57ad54df62563b286ad6672f38fdde8d7b769e145a520d7f2b05cedfb36e53] <==
	{"level":"info","ts":"2024-12-16T19:37:06.038068Z","caller":"traceutil/trace.go:171","msg":"trace[2119050398] transaction","detail":"{read_only:false; response_revision:1118; number_of_response:1; }","duration":"120.978461ms","start":"2024-12-16T19:37:05.917082Z","end":"2024-12-16T19:37:06.038060Z","steps":["trace[2119050398] 'process raft request'  (duration: 120.375079ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T19:37:10.081927Z","caller":"traceutil/trace.go:171","msg":"trace[1767719427] linearizableReadLoop","detail":"{readStateIndex:1171; appliedIndex:1170; }","duration":"159.317157ms","start":"2024-12-16T19:37:09.922596Z","end":"2024-12-16T19:37:10.081913Z","steps":["trace[1767719427] 'read index received'  (duration: 159.050824ms)","trace[1767719427] 'applied index is now lower than readState.Index'  (duration: 265.884µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T19:37:10.082035Z","caller":"traceutil/trace.go:171","msg":"trace[790173965] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"328.383251ms","start":"2024-12-16T19:37:09.753644Z","end":"2024-12-16T19:37:10.082027Z","steps":["trace[790173965] 'process raft request'  (duration: 328.042984ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:37:10.082115Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T19:37:09.753629Z","time spent":"328.424342ms","remote":"127.0.0.1:59182","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1117 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-12-16T19:37:10.082396Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.798616ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T19:37:10.082437Z","caller":"traceutil/trace.go:171","msg":"trace[660412259] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1139; }","duration":"159.841945ms","start":"2024-12-16T19:37:09.922588Z","end":"2024-12-16T19:37:10.082430Z","steps":["trace[660412259] 'agreement among raft nodes before linearized reading'  (duration: 159.789168ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T19:37:41.688895Z","caller":"traceutil/trace.go:171","msg":"trace[952250791] linearizableReadLoop","detail":"{readStateIndex:1407; appliedIndex:1406; }","duration":"240.92878ms","start":"2024-12-16T19:37:41.447941Z","end":"2024-12-16T19:37:41.688870Z","steps":["trace[952250791] 'read index received'  (duration: 240.723036ms)","trace[952250791] 'applied index is now lower than readState.Index'  (duration: 205.31µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T19:37:41.689095Z","caller":"traceutil/trace.go:171","msg":"trace[1889386926] transaction","detail":"{read_only:false; response_revision:1361; number_of_response:1; }","duration":"247.556638ms","start":"2024-12-16T19:37:41.441529Z","end":"2024-12-16T19:37:41.689086Z","steps":["trace[1889386926] 'process raft request'  (duration: 247.147493ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:37:41.689350Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.385942ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-12-16T19:37:41.689378Z","caller":"traceutil/trace.go:171","msg":"trace[1482468366] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1361; }","duration":"241.457752ms","start":"2024-12-16T19:37:41.447914Z","end":"2024-12-16T19:37:41.689372Z","steps":["trace[1482468366] 'agreement among raft nodes before linearized reading'  (duration: 241.365968ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T19:37:41.815753Z","caller":"traceutil/trace.go:171","msg":"trace[1068199803] linearizableReadLoop","detail":"{readStateIndex:1408; appliedIndex:1407; }","duration":"112.742781ms","start":"2024-12-16T19:37:41.702994Z","end":"2024-12-16T19:37:41.815737Z","steps":["trace[1068199803] 'read index received'  (duration: 111.526959ms)","trace[1068199803] 'applied index is now lower than readState.Index'  (duration: 1.215222ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T19:37:41.815980Z","caller":"traceutil/trace.go:171","msg":"trace[1142545525] transaction","detail":"{read_only:false; response_revision:1362; number_of_response:1; }","duration":"117.734669ms","start":"2024-12-16T19:37:41.698236Z","end":"2024-12-16T19:37:41.815971Z","steps":["trace[1142545525] 'process raft request'  (duration: 116.326442ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:37:41.816173Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.162779ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T19:37:41.816196Z","caller":"traceutil/trace.go:171","msg":"trace[1977513151] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:1362; }","duration":"113.217626ms","start":"2024-12-16T19:37:41.702971Z","end":"2024-12-16T19:37:41.816189Z","steps":["trace[1977513151] 'agreement among raft nodes before linearized reading'  (duration: 113.163549ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:37:41.816283Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.291883ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T19:37:41.816295Z","caller":"traceutil/trace.go:171","msg":"trace[190069588] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1362; }","duration":"104.32601ms","start":"2024-12-16T19:37:41.711965Z","end":"2024-12-16T19:37:41.816291Z","steps":["trace[190069588] 'agreement among raft nodes before linearized reading'  (duration: 104.302311ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T19:37:44.588191Z","caller":"traceutil/trace.go:171","msg":"trace[252821531] transaction","detail":"{read_only:false; response_revision:1379; number_of_response:1; }","duration":"260.43171ms","start":"2024-12-16T19:37:44.327742Z","end":"2024-12-16T19:37:44.588174Z","steps":["trace[252821531] 'process raft request'  (duration: 259.918301ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:37:49.566066Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.963537ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/registry-proxy\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T19:37:49.566115Z","caller":"traceutil/trace.go:171","msg":"trace[391387733] range","detail":"{range_begin:/registry/daemonsets/kube-system/registry-proxy; range_end:; response_count:0; response_revision:1446; }","duration":"184.063025ms","start":"2024-12-16T19:37:49.382042Z","end":"2024-12-16T19:37:49.566105Z","steps":["trace[391387733] 'range keys from in-memory index tree'  (duration: 183.932072ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:37:49.566263Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.350139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/registry-proxy-49ln5\" limit:1 ","response":"range_response_count:1 size:4039"}
	{"level":"info","ts":"2024-12-16T19:37:49.566278Z","caller":"traceutil/trace.go:171","msg":"trace[1060044770] range","detail":"{range_begin:/registry/pods/kube-system/registry-proxy-49ln5; range_end:; response_count:1; response_revision:1446; }","duration":"184.442363ms","start":"2024-12-16T19:37:49.381831Z","end":"2024-12-16T19:37:49.566274Z","steps":["trace[1060044770] 'range keys from in-memory index tree'  (duration: 184.093609ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:37:49.566474Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.832845ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregated-metrics-reader\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T19:37:49.566496Z","caller":"traceutil/trace.go:171","msg":"trace[855124741] range","detail":"{range_begin:/registry/clusterroles/system:aggregated-metrics-reader; range_end:; response_count:0; response_revision:1446; }","duration":"183.939334ms","start":"2024-12-16T19:37:49.382551Z","end":"2024-12-16T19:37:49.566490Z","steps":["trace[855124741] 'range keys from in-memory index tree'  (duration: 183.792308ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T19:37:49.567339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.550277ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2606902285041163582 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/registry-proxy-49ln5.1811bf7baaa05d24\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/registry-proxy-49ln5.1811bf7baaa05d24\" value_size:651 lease:2606902285041163266 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-12-16T19:37:49.567419Z","caller":"traceutil/trace.go:171","msg":"trace[1325591466] transaction","detail":"{read_only:false; response_revision:1447; number_of_response:1; }","duration":"184.61541ms","start":"2024-12-16T19:37:49.382794Z","end":"2024-12-16T19:37:49.567410Z","steps":["trace[1325591466] 'process raft request'  (duration: 15.732254ms)","trace[1325591466] 'compare'  (duration: 167.163416ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:40:12 up 5 min,  0 users,  load average: 0.55, 1.15, 0.61
	Linux addons-618388 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e31ff64b1d64f3f9a8478142fdaf8ce5bd23879d6318582530651cb287bcd456] <==
	I1216 19:36:29.303274       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1216 19:36:29.320633       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1216 19:37:20.996203       1 conn.go:339] Error on socket receive: read tcp 192.168.39.82:8443->192.168.39.1:56722: use of closed network connection
	E1216 19:37:21.187221       1 conn.go:339] Error on socket receive: read tcp 192.168.39.82:8443->192.168.39.1:56746: use of closed network connection
	I1216 19:37:30.632686       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.15.9"}
	I1216 19:37:49.970356       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1216 19:37:50.154148       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.67.217"}
	I1216 19:37:53.177610       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1216 19:37:56.178071       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1216 19:37:57.311201       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1216 19:38:17.523843       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 19:38:17.523914       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 19:38:17.545884       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 19:38:17.545947       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 19:38:17.598547       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 19:38:17.598816       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 19:38:17.700095       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 19:38:17.700305       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 19:38:17.705832       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 19:38:17.705876       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1216 19:38:18.701261       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1216 19:38:18.706307       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1216 19:38:18.815039       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1216 19:38:30.293022       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1216 19:40:10.536303       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.10.195"}
	
	
	==> kube-controller-manager [15085178c7262da01fec9432c0fc231cb43bf620ae6c2ccefc3eb2a726807c4a] <==
	E1216 19:39:09.764522       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 19:39:35.451267       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E1216 19:39:35.452328       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W1216 19:39:35.453323       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 19:39:35.453354       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 19:39:37.570212       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E1216 19:39:37.571438       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W1216 19:39:37.572354       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 19:39:37.572435       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 19:39:44.463622       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E1216 19:39:44.464587       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W1216 19:39:44.465550       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 19:39:44.465598       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 19:40:05.447971       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E1216 19:40:05.449281       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W1216 19:40:05.450234       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 19:40:05.450316       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 19:40:10.064188       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E1216 19:40:10.065307       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W1216 19:40:10.066193       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 19:40:10.066226       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1216 19:40:10.363378       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="41.695368ms"
	I1216 19:40:10.387766       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="24.226465ms"
	I1216 19:40:10.406831       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="18.923445ms"
	I1216 19:40:10.407097       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="110.325µs"
	
	
	==> kube-proxy [914213cd5da43b2e02f7ae5ef1ad795000630d45fff565905982bef7c39857ba] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1216 19:35:48.357970       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1216 19:35:48.369141       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.82"]
	E1216 19:35:48.369207       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 19:35:48.483941       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1216 19:35:48.483971       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 19:35:48.483992       1 server_linux.go:170] "Using iptables Proxier"
	I1216 19:35:48.499561       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 19:35:48.499903       1 server.go:497] "Version info" version="v1.32.0"
	I1216 19:35:48.499916       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 19:35:48.506236       1 config.go:199] "Starting service config controller"
	I1216 19:35:48.506335       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 19:35:48.506419       1 config.go:105] "Starting endpoint slice config controller"
	I1216 19:35:48.506424       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 19:35:48.511294       1 config.go:329] "Starting node config controller"
	I1216 19:35:48.511320       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 19:35:48.608236       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 19:35:48.608274       1 shared_informer.go:320] Caches are synced for service config
	I1216 19:35:48.618826       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6184416c7f2245b19769b6b6c18aff4e9bfdab07bd818fd77cc33aff5bfa0eab] <==
	W1216 19:35:38.372443       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 19:35:38.372870       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 19:35:39.308036       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1216 19:35:39.308087       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 19:35:39.334838       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 19:35:39.335184       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 19:35:39.429411       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 19:35:39.429598       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 19:35:39.445579       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1216 19:35:39.445785       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 19:35:39.460982       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E1216 19:35:39.461118       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 19:35:39.547899       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 19:35:39.548066       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 19:35:39.551809       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1216 19:35:39.551899       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 19:35:39.569034       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1216 19:35:39.569222       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 19:35:39.589889       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 19:35:39.590008       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 19:35:39.708266       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1216 19:35:39.708490       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1216 19:35:39.721372       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1216 19:35:39.721877       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1216 19:35:42.369034       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 19:39:41 addons-618388 kubelet[1226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 16 19:39:41 addons-618388 kubelet[1226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 16 19:39:41 addons-618388 kubelet[1226]: E1216 19:39:41.605412    1226 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734377981604891630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 19:39:41 addons-618388 kubelet[1226]: E1216 19:39:41.605452    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734377981604891630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 19:39:51 addons-618388 kubelet[1226]: E1216 19:39:51.608782    1226 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734377991608338970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 19:39:51 addons-618388 kubelet[1226]: E1216 19:39:51.608948    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734377991608338970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 19:39:56 addons-618388 kubelet[1226]: I1216 19:39:56.288650    1226 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 19:40:01 addons-618388 kubelet[1226]: E1216 19:40:01.611662    1226 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734378001611183959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 19:40:01 addons-618388 kubelet[1226]: E1216 19:40:01.611687    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734378001611183959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369553    1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="7c08e8c6-a4d2-48d1-8641-fce068dbafa2" containerName="csi-resizer"
	Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369584    1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="c682dd96-c52d-4c59-8b61-6fb5e8f9027a" containerName="csi-external-health-monitor-controller"
	Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369592    1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="4a9bc6bd-7ed3-4b60-9f26-33fb55f94e9e" containerName="volume-snapshot-controller"
	Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369597    1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="c682dd96-c52d-4c59-8b61-6fb5e8f9027a" containerName="csi-snapshotter"
	Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369603    1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="c682dd96-c52d-4c59-8b61-6fb5e8f9027a" containerName="hostpath"
	Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369607    1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="c682dd96-c52d-4c59-8b61-6fb5e8f9027a" containerName="csi-provisioner"
	Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369612    1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="2af3ed28-f280-421a-941f-b1c7d9a7b143" containerName="helper-pod"
	Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369618    1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="c8817fea-96d6-4405-8c50-674c5e47b8c7" containerName="volume-snapshot-controller"
	Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369623    1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="c682dd96-c52d-4c59-8b61-6fb5e8f9027a" containerName="node-driver-registrar"
	Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369628    1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="c331e026-63d4-4f10-a0c5-3bf7d22b1740" containerName="task-pv-container"
	Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369632    1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="c682dd96-c52d-4c59-8b61-6fb5e8f9027a" containerName="liveness-probe"
	Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369637    1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="a6ff89b4-0d31-4e72-826a-12cf756c7e4c" containerName="csi-attacher"
	Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.458541    1226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhbv6\" (UniqueName: \"kubernetes.io/projected/3f8649c3-4648-465c-ab4d-19179cbee81d-kube-api-access-jhbv6\") pod \"hello-world-app-7d9564db4-pbr29\" (UID: \"3f8649c3-4648-465c-ab4d-19179cbee81d\") " pod="default/hello-world-app-7d9564db4-pbr29"
	Dec 16 19:40:11 addons-618388 kubelet[1226]: E1216 19:40:11.615842    1226 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734378011615240313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 19:40:11 addons-618388 kubelet[1226]: E1216 19:40:11.615869    1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734378011615240313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 19:40:12 addons-618388 kubelet[1226]: I1216 19:40:12.288174    1226 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-t9xls" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [78577479cd15e2342c6dd4796ceb214c0d74c7572c9ae96444bdb9b7e2cba77d] <==
	I1216 19:35:53.221430       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 19:35:53.277565       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 19:35:53.283325       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 19:35:53.396537       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 19:35:53.396686       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-618388_bfcdc5f3-9c79-4ede-86c9-457166d105fe!
	I1216 19:35:53.397653       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e581e24f-4a34-4429-b10b-04c523c86f00", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-618388_bfcdc5f3-9c79-4ede-86c9-457166d105fe became leader
	I1216 19:35:53.524028       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-618388_bfcdc5f3-9c79-4ede-86c9-457166d105fe!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-618388 -n addons-618388
helpers_test.go:261: (dbg) Run:  kubectl --context addons-618388 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-pbr29 ingress-nginx-admission-create-sgp7s ingress-nginx-admission-patch-lsm5p
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-618388 describe pod hello-world-app-7d9564db4-pbr29 ingress-nginx-admission-create-sgp7s ingress-nginx-admission-patch-lsm5p
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-618388 describe pod hello-world-app-7d9564db4-pbr29 ingress-nginx-admission-create-sgp7s ingress-nginx-admission-patch-lsm5p: exit status 1 (68.739624ms)

-- stdout --
	Name:             hello-world-app-7d9564db4-pbr29
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-618388/192.168.39.82
	Start Time:       Mon, 16 Dec 2024 19:40:10 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jhbv6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jhbv6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-pbr29 to addons-618388
	  Normal  Pulling    3s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.423s (1.423s including waiting). Image size: 4944818 bytes.
	  Normal  Created    1s    kubelet            Created container: hello-world-app
	  Normal  Started    1s    kubelet            Started container hello-world-app

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-sgp7s" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-lsm5p" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-618388 describe pod hello-world-app-7d9564db4-pbr29 ingress-nginx-admission-create-sgp7s ingress-nginx-admission-patch-lsm5p: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-618388 addons disable ingress-dns --alsologtostderr -v=1: (1.218647901s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-618388 addons disable ingress --alsologtostderr -v=1: (7.758837783s)
--- FAIL: TestAddons/parallel/Ingress (152.48s)

x
+
TestPreload (285.31s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-817668 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1216 20:37:13.885870   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-817668 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m11.019215193s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-817668 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-817668 image pull gcr.io/k8s-minikube/busybox: (1.541382317s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-817668
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-817668: (1m30.793336801s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-817668 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-817668 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (58.781600391s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-817668 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
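For reference, the failing sequence above can be replayed by hand with the same commands the test drives; the binary path, profile name, and flags below are taken from the log (the --alsologtostderr verbosity flags are dropped for brevity), and the trailing grep is only a hypothetical stand-in for the check at preload_test.go:76, not the test's own Go assertion:

	# sketch of the TestPreload flow; assumes it is run from the checkout that built out/minikube-linux-amd64
	out/minikube-linux-amd64 start -p test-preload-817668 --memory=2200 --wait=true --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-817668 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-817668
	out/minikube-linux-amd64 start -p test-preload-817668 --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-817668 image list | grep busybox   # expected to match; the image list above no longer contains it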
panic.go:629: *** TestPreload FAILED at 2024-12-16 20:40:41.372146291 +0000 UTC m=+3959.845794251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-817668 -n test-preload-817668
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-817668 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-817668 logs -n 25: (1.145331282s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-228964 ssh -n                                                                 | multinode-228964     | jenkins | v1.34.0 | 16 Dec 24 20:22 UTC | 16 Dec 24 20:22 UTC |
	|         | multinode-228964-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-228964 ssh -n multinode-228964 sudo cat                                       | multinode-228964     | jenkins | v1.34.0 | 16 Dec 24 20:22 UTC | 16 Dec 24 20:22 UTC |
	|         | /home/docker/cp-test_multinode-228964-m03_multinode-228964.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-228964 cp multinode-228964-m03:/home/docker/cp-test.txt                       | multinode-228964     | jenkins | v1.34.0 | 16 Dec 24 20:22 UTC | 16 Dec 24 20:22 UTC |
	|         | multinode-228964-m02:/home/docker/cp-test_multinode-228964-m03_multinode-228964-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-228964 ssh -n                                                                 | multinode-228964     | jenkins | v1.34.0 | 16 Dec 24 20:22 UTC | 16 Dec 24 20:22 UTC |
	|         | multinode-228964-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-228964 ssh -n multinode-228964-m02 sudo cat                                   | multinode-228964     | jenkins | v1.34.0 | 16 Dec 24 20:22 UTC | 16 Dec 24 20:22 UTC |
	|         | /home/docker/cp-test_multinode-228964-m03_multinode-228964-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-228964 node stop m03                                                          | multinode-228964     | jenkins | v1.34.0 | 16 Dec 24 20:22 UTC | 16 Dec 24 20:22 UTC |
	| node    | multinode-228964 node start                                                             | multinode-228964     | jenkins | v1.34.0 | 16 Dec 24 20:22 UTC | 16 Dec 24 20:23 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-228964                                                                | multinode-228964     | jenkins | v1.34.0 | 16 Dec 24 20:23 UTC |                     |
	| stop    | -p multinode-228964                                                                     | multinode-228964     | jenkins | v1.34.0 | 16 Dec 24 20:23 UTC | 16 Dec 24 20:26 UTC |
	| start   | -p multinode-228964                                                                     | multinode-228964     | jenkins | v1.34.0 | 16 Dec 24 20:26 UTC | 16 Dec 24 20:30 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-228964                                                                | multinode-228964     | jenkins | v1.34.0 | 16 Dec 24 20:30 UTC |                     |
	| node    | multinode-228964 node delete                                                            | multinode-228964     | jenkins | v1.34.0 | 16 Dec 24 20:30 UTC | 16 Dec 24 20:30 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-228964 stop                                                                   | multinode-228964     | jenkins | v1.34.0 | 16 Dec 24 20:30 UTC | 16 Dec 24 20:33 UTC |
	| start   | -p multinode-228964                                                                     | multinode-228964     | jenkins | v1.34.0 | 16 Dec 24 20:33 UTC | 16 Dec 24 20:35 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-228964                                                                | multinode-228964     | jenkins | v1.34.0 | 16 Dec 24 20:35 UTC |                     |
	| start   | -p multinode-228964-m02                                                                 | multinode-228964-m02 | jenkins | v1.34.0 | 16 Dec 24 20:35 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-228964-m03                                                                 | multinode-228964-m03 | jenkins | v1.34.0 | 16 Dec 24 20:35 UTC | 16 Dec 24 20:35 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-228964                                                                 | multinode-228964     | jenkins | v1.34.0 | 16 Dec 24 20:35 UTC |                     |
	| delete  | -p multinode-228964-m03                                                                 | multinode-228964-m03 | jenkins | v1.34.0 | 16 Dec 24 20:35 UTC | 16 Dec 24 20:35 UTC |
	| delete  | -p multinode-228964                                                                     | multinode-228964     | jenkins | v1.34.0 | 16 Dec 24 20:35 UTC | 16 Dec 24 20:35 UTC |
	| start   | -p test-preload-817668                                                                  | test-preload-817668  | jenkins | v1.34.0 | 16 Dec 24 20:35 UTC | 16 Dec 24 20:38 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-817668 image pull                                                          | test-preload-817668  | jenkins | v1.34.0 | 16 Dec 24 20:38 UTC | 16 Dec 24 20:38 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-817668                                                                  | test-preload-817668  | jenkins | v1.34.0 | 16 Dec 24 20:38 UTC | 16 Dec 24 20:39 UTC |
	| start   | -p test-preload-817668                                                                  | test-preload-817668  | jenkins | v1.34.0 | 16 Dec 24 20:39 UTC | 16 Dec 24 20:40 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-817668 image list                                                          | test-preload-817668  | jenkins | v1.34.0 | 16 Dec 24 20:40 UTC | 16 Dec 24 20:40 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
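	For reference, the TestPreload sequence recorded in the audit table above can be replayed by hand with roughly the commands below. This is a sketch assembled from those entries (same profile name test-preload-817668); in the CI run the binary invoked is out/minikube-linux-amd64, abbreviated here as minikube.

	# 1. Start on the older Kubernetes version with the preload tarball disabled.
	minikube start -p test-preload-817668 --memory=2200 --alsologtostderr --wait=true \
	  --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	# 2. Pull an extra image so the restarted cluster has something to carry across the restart.
	minikube -p test-preload-817668 image pull gcr.io/k8s-minikube/busybox
	# 3. Stop, then restart without pinning the version, exercising the preload path on startup.
	minikube stop -p test-preload-817668
	minikube start -p test-preload-817668 --memory=2200 --alsologtostderr -v=1 \
	  --wait=true --driver=kvm2 --container-runtime=crio
	# 4. List images to check whether the pulled image is still present after the restart.
	minikube -p test-preload-817668 image list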
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 20:39:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 20:39:42.416139   47640 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:39:42.416239   47640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:39:42.416244   47640 out.go:358] Setting ErrFile to fd 2...
	I1216 20:39:42.416248   47640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:39:42.416423   47640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:39:42.416918   47640 out.go:352] Setting JSON to false
	I1216 20:39:42.417778   47640 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4927,"bootTime":1734376655,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 20:39:42.417873   47640 start.go:139] virtualization: kvm guest
	I1216 20:39:42.420420   47640 out.go:177] * [test-preload-817668] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 20:39:42.421849   47640 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 20:39:42.421847   47640 notify.go:220] Checking for updates...
	I1216 20:39:42.424336   47640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 20:39:42.425699   47640 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:39:42.426967   47640 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:39:42.428445   47640 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 20:39:42.429733   47640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 20:39:42.431504   47640 config.go:182] Loaded profile config "test-preload-817668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1216 20:39:42.431966   47640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:39:42.432047   47640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:39:42.446691   47640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I1216 20:39:42.447080   47640 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:39:42.447738   47640 main.go:141] libmachine: Using API Version  1
	I1216 20:39:42.447761   47640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:39:42.448078   47640 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:39:42.448225   47640 main.go:141] libmachine: (test-preload-817668) Calling .DriverName
	I1216 20:39:42.450024   47640 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I1216 20:39:42.451233   47640 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 20:39:42.451538   47640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:39:42.451573   47640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:39:42.466102   47640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I1216 20:39:42.466507   47640 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:39:42.466988   47640 main.go:141] libmachine: Using API Version  1
	I1216 20:39:42.467011   47640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:39:42.467299   47640 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:39:42.467478   47640 main.go:141] libmachine: (test-preload-817668) Calling .DriverName
	I1216 20:39:42.502520   47640 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 20:39:42.503770   47640 start.go:297] selected driver: kvm2
	I1216 20:39:42.503797   47640 start.go:901] validating driver "kvm2" against &{Name:test-preload-817668 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-817668 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:39:42.503900   47640 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 20:39:42.504583   47640 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:39:42.504665   47640 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 20:39:42.519346   47640 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1216 20:39:42.519709   47640 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 20:39:42.519738   47640 cni.go:84] Creating CNI manager for ""
	I1216 20:39:42.519781   47640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:39:42.519832   47640 start.go:340] cluster config:
	{Name:test-preload-817668 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-817668 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:39:42.519933   47640 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:39:42.521706   47640 out.go:177] * Starting "test-preload-817668" primary control-plane node in "test-preload-817668" cluster
	I1216 20:39:42.522893   47640 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1216 20:39:42.551897   47640 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1216 20:39:42.551921   47640 cache.go:56] Caching tarball of preloaded images
	I1216 20:39:42.552082   47640 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1216 20:39:42.553919   47640 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1216 20:39:42.555309   47640 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1216 20:39:42.586998   47640 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1216 20:39:45.612422   47640 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1216 20:39:45.612524   47640 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1216 20:39:46.453579   47640 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I1216 20:39:46.453711   47640 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/test-preload-817668/config.json ...
	I1216 20:39:46.453963   47640 start.go:360] acquireMachinesLock for test-preload-817668: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:39:46.454029   47640 start.go:364] duration metric: took 43.563µs to acquireMachinesLock for "test-preload-817668"
	I1216 20:39:46.454060   47640 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:39:46.454068   47640 fix.go:54] fixHost starting: 
	I1216 20:39:46.454362   47640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:39:46.454408   47640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:39:46.469217   47640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38811
	I1216 20:39:46.469638   47640 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:39:46.470119   47640 main.go:141] libmachine: Using API Version  1
	I1216 20:39:46.470148   47640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:39:46.470465   47640 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:39:46.470652   47640 main.go:141] libmachine: (test-preload-817668) Calling .DriverName
	I1216 20:39:46.470820   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetState
	I1216 20:39:46.472470   47640 fix.go:112] recreateIfNeeded on test-preload-817668: state=Stopped err=<nil>
	I1216 20:39:46.472489   47640 main.go:141] libmachine: (test-preload-817668) Calling .DriverName
	W1216 20:39:46.472629   47640 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:39:46.474994   47640 out.go:177] * Restarting existing kvm2 VM for "test-preload-817668" ...
	I1216 20:39:46.476487   47640 main.go:141] libmachine: (test-preload-817668) Calling .Start
	I1216 20:39:46.476686   47640 main.go:141] libmachine: (test-preload-817668) starting domain...
	I1216 20:39:46.476709   47640 main.go:141] libmachine: (test-preload-817668) ensuring networks are active...
	I1216 20:39:46.477443   47640 main.go:141] libmachine: (test-preload-817668) Ensuring network default is active
	I1216 20:39:46.477789   47640 main.go:141] libmachine: (test-preload-817668) Ensuring network mk-test-preload-817668 is active
	I1216 20:39:46.478137   47640 main.go:141] libmachine: (test-preload-817668) getting domain XML...
	I1216 20:39:46.478840   47640 main.go:141] libmachine: (test-preload-817668) creating domain...
	I1216 20:39:47.689049   47640 main.go:141] libmachine: (test-preload-817668) waiting for IP...
	I1216 20:39:47.689819   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:39:47.690235   47640 main.go:141] libmachine: (test-preload-817668) DBG | unable to find current IP address of domain test-preload-817668 in network mk-test-preload-817668
	I1216 20:39:47.690302   47640 main.go:141] libmachine: (test-preload-817668) DBG | I1216 20:39:47.690214   47692 retry.go:31] will retry after 273.022099ms: waiting for domain to come up
	I1216 20:39:47.964818   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:39:47.965334   47640 main.go:141] libmachine: (test-preload-817668) DBG | unable to find current IP address of domain test-preload-817668 in network mk-test-preload-817668
	I1216 20:39:47.965364   47640 main.go:141] libmachine: (test-preload-817668) DBG | I1216 20:39:47.965297   47692 retry.go:31] will retry after 354.049333ms: waiting for domain to come up
	I1216 20:39:48.321000   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:39:48.321404   47640 main.go:141] libmachine: (test-preload-817668) DBG | unable to find current IP address of domain test-preload-817668 in network mk-test-preload-817668
	I1216 20:39:48.321457   47640 main.go:141] libmachine: (test-preload-817668) DBG | I1216 20:39:48.321380   47692 retry.go:31] will retry after 422.911945ms: waiting for domain to come up
	I1216 20:39:48.746100   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:39:48.746546   47640 main.go:141] libmachine: (test-preload-817668) DBG | unable to find current IP address of domain test-preload-817668 in network mk-test-preload-817668
	I1216 20:39:48.746591   47640 main.go:141] libmachine: (test-preload-817668) DBG | I1216 20:39:48.746512   47692 retry.go:31] will retry after 468.041045ms: waiting for domain to come up
	I1216 20:39:49.216182   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:39:49.216657   47640 main.go:141] libmachine: (test-preload-817668) DBG | unable to find current IP address of domain test-preload-817668 in network mk-test-preload-817668
	I1216 20:39:49.216685   47640 main.go:141] libmachine: (test-preload-817668) DBG | I1216 20:39:49.216609   47692 retry.go:31] will retry after 732.679939ms: waiting for domain to come up
	I1216 20:39:49.950678   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:39:49.951070   47640 main.go:141] libmachine: (test-preload-817668) DBG | unable to find current IP address of domain test-preload-817668 in network mk-test-preload-817668
	I1216 20:39:49.951102   47640 main.go:141] libmachine: (test-preload-817668) DBG | I1216 20:39:49.951055   47692 retry.go:31] will retry after 834.156669ms: waiting for domain to come up
	I1216 20:39:50.787069   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:39:50.787593   47640 main.go:141] libmachine: (test-preload-817668) DBG | unable to find current IP address of domain test-preload-817668 in network mk-test-preload-817668
	I1216 20:39:50.787638   47640 main.go:141] libmachine: (test-preload-817668) DBG | I1216 20:39:50.787556   47692 retry.go:31] will retry after 935.972689ms: waiting for domain to come up
	I1216 20:39:51.725232   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:39:51.725688   47640 main.go:141] libmachine: (test-preload-817668) DBG | unable to find current IP address of domain test-preload-817668 in network mk-test-preload-817668
	I1216 20:39:51.725713   47640 main.go:141] libmachine: (test-preload-817668) DBG | I1216 20:39:51.725657   47692 retry.go:31] will retry after 1.40367068s: waiting for domain to come up
	I1216 20:39:53.131414   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:39:53.131991   47640 main.go:141] libmachine: (test-preload-817668) DBG | unable to find current IP address of domain test-preload-817668 in network mk-test-preload-817668
	I1216 20:39:53.132065   47640 main.go:141] libmachine: (test-preload-817668) DBG | I1216 20:39:53.131948   47692 retry.go:31] will retry after 1.281892137s: waiting for domain to come up
	I1216 20:39:54.415452   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:39:54.415872   47640 main.go:141] libmachine: (test-preload-817668) DBG | unable to find current IP address of domain test-preload-817668 in network mk-test-preload-817668
	I1216 20:39:54.415891   47640 main.go:141] libmachine: (test-preload-817668) DBG | I1216 20:39:54.415853   47692 retry.go:31] will retry after 1.530224403s: waiting for domain to come up
	I1216 20:39:55.948854   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:39:55.949273   47640 main.go:141] libmachine: (test-preload-817668) DBG | unable to find current IP address of domain test-preload-817668 in network mk-test-preload-817668
	I1216 20:39:55.949289   47640 main.go:141] libmachine: (test-preload-817668) DBG | I1216 20:39:55.949237   47692 retry.go:31] will retry after 1.947772809s: waiting for domain to come up
	I1216 20:39:57.899483   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:39:57.900030   47640 main.go:141] libmachine: (test-preload-817668) DBG | unable to find current IP address of domain test-preload-817668 in network mk-test-preload-817668
	I1216 20:39:57.900057   47640 main.go:141] libmachine: (test-preload-817668) DBG | I1216 20:39:57.899973   47692 retry.go:31] will retry after 2.480308449s: waiting for domain to come up
	I1216 20:40:00.383671   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:00.384095   47640 main.go:141] libmachine: (test-preload-817668) DBG | unable to find current IP address of domain test-preload-817668 in network mk-test-preload-817668
	I1216 20:40:00.384120   47640 main.go:141] libmachine: (test-preload-817668) DBG | I1216 20:40:00.384053   47692 retry.go:31] will retry after 3.820785694s: waiting for domain to come up
	I1216 20:40:04.208505   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.209029   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has current primary IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.209073   47640 main.go:141] libmachine: (test-preload-817668) found domain IP: 192.168.39.211
	I1216 20:40:04.209125   47640 main.go:141] libmachine: (test-preload-817668) reserving static IP address...
	I1216 20:40:04.209568   47640 main.go:141] libmachine: (test-preload-817668) reserved static IP address 192.168.39.211 for domain test-preload-817668
	I1216 20:40:04.209588   47640 main.go:141] libmachine: (test-preload-817668) waiting for SSH...
	I1216 20:40:04.209632   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "test-preload-817668", mac: "52:54:00:e4:c7:e6", ip: "192.168.39.211"} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:04.209653   47640 main.go:141] libmachine: (test-preload-817668) DBG | skip adding static IP to network mk-test-preload-817668 - found existing host DHCP lease matching {name: "test-preload-817668", mac: "52:54:00:e4:c7:e6", ip: "192.168.39.211"}
	I1216 20:40:04.209662   47640 main.go:141] libmachine: (test-preload-817668) DBG | Getting to WaitForSSH function...
	I1216 20:40:04.211874   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.212212   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:c7:e6", ip: ""} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:04.212236   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.212393   47640 main.go:141] libmachine: (test-preload-817668) DBG | Using SSH client type: external
	I1216 20:40:04.212414   47640 main.go:141] libmachine: (test-preload-817668) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/test-preload-817668/id_rsa (-rw-------)
	I1216 20:40:04.212438   47640 main.go:141] libmachine: (test-preload-817668) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/test-preload-817668/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:40:04.212450   47640 main.go:141] libmachine: (test-preload-817668) DBG | About to run SSH command:
	I1216 20:40:04.212459   47640 main.go:141] libmachine: (test-preload-817668) DBG | exit 0
	I1216 20:40:04.335503   47640 main.go:141] libmachine: (test-preload-817668) DBG | SSH cmd err, output: <nil>: 
	I1216 20:40:04.335881   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetConfigRaw
	I1216 20:40:04.336558   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetIP
	I1216 20:40:04.339290   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.339723   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:c7:e6", ip: ""} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:04.339752   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.339985   47640 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/test-preload-817668/config.json ...
	I1216 20:40:04.340180   47640 machine.go:93] provisionDockerMachine start ...
	I1216 20:40:04.340199   47640 main.go:141] libmachine: (test-preload-817668) Calling .DriverName
	I1216 20:40:04.340433   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHHostname
	I1216 20:40:04.342917   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.343367   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:c7:e6", ip: ""} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:04.343391   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.343551   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHPort
	I1216 20:40:04.343731   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHKeyPath
	I1216 20:40:04.343846   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHKeyPath
	I1216 20:40:04.343991   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHUsername
	I1216 20:40:04.344127   47640 main.go:141] libmachine: Using SSH client type: native
	I1216 20:40:04.344315   47640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1216 20:40:04.344326   47640 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:40:04.443772   47640 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:40:04.443803   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetMachineName
	I1216 20:40:04.444076   47640 buildroot.go:166] provisioning hostname "test-preload-817668"
	I1216 20:40:04.444107   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetMachineName
	I1216 20:40:04.444294   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHHostname
	I1216 20:40:04.447350   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.447803   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:c7:e6", ip: ""} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:04.447831   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.448119   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHPort
	I1216 20:40:04.448367   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHKeyPath
	I1216 20:40:04.448574   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHKeyPath
	I1216 20:40:04.448737   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHUsername
	I1216 20:40:04.448899   47640 main.go:141] libmachine: Using SSH client type: native
	I1216 20:40:04.449078   47640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1216 20:40:04.449090   47640 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-817668 && echo "test-preload-817668" | sudo tee /etc/hostname
	I1216 20:40:04.563714   47640 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-817668
	
	I1216 20:40:04.563767   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHHostname
	I1216 20:40:04.566838   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.567321   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:c7:e6", ip: ""} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:04.567358   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.567596   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHPort
	I1216 20:40:04.567777   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHKeyPath
	I1216 20:40:04.567913   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHKeyPath
	I1216 20:40:04.568031   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHUsername
	I1216 20:40:04.568171   47640 main.go:141] libmachine: Using SSH client type: native
	I1216 20:40:04.568369   47640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1216 20:40:04.568394   47640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-817668' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-817668/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-817668' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:40:04.681520   47640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:40:04.681549   47640 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:40:04.681595   47640 buildroot.go:174] setting up certificates
	I1216 20:40:04.681607   47640 provision.go:84] configureAuth start
	I1216 20:40:04.681620   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetMachineName
	I1216 20:40:04.681930   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetIP
	I1216 20:40:04.684770   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.685069   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:c7:e6", ip: ""} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:04.685098   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.685213   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHHostname
	I1216 20:40:04.687485   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.687736   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:c7:e6", ip: ""} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:04.687766   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.687910   47640 provision.go:143] copyHostCerts
	I1216 20:40:04.687982   47640 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:40:04.688000   47640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:40:04.688095   47640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:40:04.688225   47640 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:40:04.688237   47640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:40:04.688281   47640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:40:04.688371   47640 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:40:04.688381   47640 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:40:04.688416   47640 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:40:04.688492   47640 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.test-preload-817668 san=[127.0.0.1 192.168.39.211 localhost minikube test-preload-817668]
	I1216 20:40:04.790411   47640 provision.go:177] copyRemoteCerts
	I1216 20:40:04.790482   47640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:40:04.790511   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHHostname
	I1216 20:40:04.793390   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.793745   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:c7:e6", ip: ""} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:04.793770   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.794050   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHPort
	I1216 20:40:04.794285   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHKeyPath
	I1216 20:40:04.794490   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHUsername
	I1216 20:40:04.794662   47640 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/test-preload-817668/id_rsa Username:docker}
	I1216 20:40:04.879074   47640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1216 20:40:04.911273   47640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 20:40:04.941357   47640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:40:04.971470   47640 provision.go:87] duration metric: took 289.849959ms to configureAuth
	I1216 20:40:04.971499   47640 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:40:04.971720   47640 config.go:182] Loaded profile config "test-preload-817668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1216 20:40:04.971805   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHHostname
	I1216 20:40:04.974590   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.974956   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:c7:e6", ip: ""} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:04.974982   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:04.975156   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHPort
	I1216 20:40:04.975377   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHKeyPath
	I1216 20:40:04.975590   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHKeyPath
	I1216 20:40:04.975748   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHUsername
	I1216 20:40:04.975896   47640 main.go:141] libmachine: Using SSH client type: native
	I1216 20:40:04.976134   47640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1216 20:40:04.976157   47640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:40:05.207438   47640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:40:05.207465   47640 machine.go:96] duration metric: took 867.272652ms to provisionDockerMachine
	I1216 20:40:05.207477   47640 start.go:293] postStartSetup for "test-preload-817668" (driver="kvm2")
	I1216 20:40:05.207487   47640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:40:05.207503   47640 main.go:141] libmachine: (test-preload-817668) Calling .DriverName
	I1216 20:40:05.207813   47640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:40:05.207866   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHHostname
	I1216 20:40:05.210498   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:05.210836   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:c7:e6", ip: ""} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:05.210873   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:05.211069   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHPort
	I1216 20:40:05.211275   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHKeyPath
	I1216 20:40:05.211470   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHUsername
	I1216 20:40:05.211599   47640 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/test-preload-817668/id_rsa Username:docker}
	I1216 20:40:05.290495   47640 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:40:05.295933   47640 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:40:05.295965   47640 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:40:05.296070   47640 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:40:05.296178   47640 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:40:05.296359   47640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:40:05.307265   47640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:40:05.335208   47640 start.go:296] duration metric: took 127.718199ms for postStartSetup
	I1216 20:40:05.335294   47640 fix.go:56] duration metric: took 18.881224561s for fixHost
	I1216 20:40:05.335321   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHHostname
	I1216 20:40:05.338525   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:05.338920   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:c7:e6", ip: ""} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:05.338951   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:05.339123   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHPort
	I1216 20:40:05.339319   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHKeyPath
	I1216 20:40:05.339516   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHKeyPath
	I1216 20:40:05.339660   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHUsername
	I1216 20:40:05.339827   47640 main.go:141] libmachine: Using SSH client type: native
	I1216 20:40:05.340045   47640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I1216 20:40:05.340059   47640 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:40:05.440626   47640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734381605.411434747
	
	I1216 20:40:05.440667   47640 fix.go:216] guest clock: 1734381605.411434747
	I1216 20:40:05.440677   47640 fix.go:229] Guest: 2024-12-16 20:40:05.411434747 +0000 UTC Remote: 2024-12-16 20:40:05.335301995 +0000 UTC m=+22.957234465 (delta=76.132752ms)
	I1216 20:40:05.440724   47640 fix.go:200] guest clock delta is within tolerance: 76.132752ms
	I1216 20:40:05.440732   47640 start.go:83] releasing machines lock for "test-preload-817668", held for 18.986679319s
	I1216 20:40:05.440762   47640 main.go:141] libmachine: (test-preload-817668) Calling .DriverName
	I1216 20:40:05.441054   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetIP
	I1216 20:40:05.444065   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:05.444388   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:c7:e6", ip: ""} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:05.444422   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:05.444583   47640 main.go:141] libmachine: (test-preload-817668) Calling .DriverName
	I1216 20:40:05.445126   47640 main.go:141] libmachine: (test-preload-817668) Calling .DriverName
	I1216 20:40:05.445336   47640 main.go:141] libmachine: (test-preload-817668) Calling .DriverName
	I1216 20:40:05.445438   47640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:40:05.445484   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHHostname
	I1216 20:40:05.445550   47640 ssh_runner.go:195] Run: cat /version.json
	I1216 20:40:05.445577   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHHostname
	I1216 20:40:05.448215   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:05.448507   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:05.448592   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:c7:e6", ip: ""} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:05.448639   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:05.448779   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHPort
	I1216 20:40:05.448987   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHKeyPath
	I1216 20:40:05.449056   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:c7:e6", ip: ""} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:05.449076   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:05.449144   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHUsername
	I1216 20:40:05.449230   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHPort
	I1216 20:40:05.449258   47640 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/test-preload-817668/id_rsa Username:docker}
	I1216 20:40:05.449372   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHKeyPath
	I1216 20:40:05.449500   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHUsername
	I1216 20:40:05.449613   47640 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/test-preload-817668/id_rsa Username:docker}
	I1216 20:40:05.547868   47640 ssh_runner.go:195] Run: systemctl --version
	I1216 20:40:05.554352   47640 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:40:05.706871   47640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:40:05.714520   47640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:40:05.714609   47640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:40:05.733421   47640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:40:05.733455   47640 start.go:495] detecting cgroup driver to use...
	I1216 20:40:05.733532   47640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:40:05.750367   47640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:40:05.765833   47640 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:40:05.765905   47640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:40:05.781146   47640 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:40:05.796451   47640 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:40:05.915411   47640 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:40:06.069982   47640 docker.go:233] disabling docker service ...
	I1216 20:40:06.070068   47640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:40:06.084908   47640 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:40:06.099178   47640 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:40:06.242581   47640 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:40:06.366077   47640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:40:06.381673   47640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:40:06.402253   47640 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1216 20:40:06.402310   47640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:40:06.413616   47640 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:40:06.413688   47640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:40:06.424754   47640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:40:06.436062   47640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:40:06.447508   47640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:40:06.459221   47640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:40:06.470843   47640 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:40:06.490178   47640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:40:06.501648   47640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:40:06.512116   47640 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:40:06.512177   47640 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:40:06.526026   47640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
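
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup_manager, conmon_cgroup, default_sysctls) and then prepares netfilter before the daemon is reloaded and crio restarted. A minimal Go sketch of the same in-place edits, using the paths and values copied from the log; this is an illustration, not minikube's own helper:

package main

import (
    "log"
    "os/exec"
)

func main() {
    conf := "/etc/crio/crio.conf.d/02-crio.conf"
    // Each entry is a sed expression applied in place, mirroring the commands above.
    edits := []string{
        `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|`,
        `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
    }
    for _, e := range edits {
        if out, err := exec.Command("sudo", "sed", "-i", e, conf).CombinedOutput(); err != nil {
            log.Fatalf("sed %q failed: %v\n%s", e, err, out)
        }
    }
    log.Println("cri-o drop-in updated; restart crio afterwards")
}
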
	I1216 20:40:06.536309   47640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:40:06.660440   47640 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:40:06.758061   47640 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:40:06.758138   47640 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:40:06.763157   47640 start.go:563] Will wait 60s for crictl version
	I1216 20:40:06.763227   47640 ssh_runner.go:195] Run: which crictl
	I1216 20:40:06.767464   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:40:06.809960   47640 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
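
After restarting crio, the start-up code waits up to 60s for /var/run/crio/crio.sock to exist before probing crictl. A rough sketch of that kind of wait, assuming a local filesystem check rather than minikube's ssh_runner:

package main

import (
    "fmt"
    "os"
    "time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        if _, err := os.Stat(path); err == nil {
            return nil
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
    if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("crio socket is ready")
}
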
	I1216 20:40:06.810068   47640 ssh_runner.go:195] Run: crio --version
	I1216 20:40:06.839571   47640 ssh_runner.go:195] Run: crio --version
	I1216 20:40:06.871026   47640 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1216 20:40:06.872413   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetIP
	I1216 20:40:06.875159   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:06.875539   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:c7:e6", ip: ""} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:06.875561   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:06.875777   47640 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 20:40:06.880297   47640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
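
The host.minikube.internal entry is refreshed by filtering any existing line out of /etc/hosts and appending the current one in a single bash pipeline, as shown above. A hypothetical Go equivalent of that grep-and-append idiom, with the file path and hostname taken from the log:

package main

import (
    "log"
    "os"
    "strings"
)

func main() {
    const hostsFile = "/etc/hosts"
    const entry = "192.168.39.1\thost.minikube.internal"

    data, err := os.ReadFile(hostsFile)
    if err != nil {
        log.Fatal(err)
    }
    lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
    var kept []string
    for _, line := range lines {
        // Drop any stale host.minikube.internal line, like the `grep -v` above.
        if strings.HasSuffix(line, "\thost.minikube.internal") {
            continue
        }
        kept = append(kept, line)
    }
    kept = append(kept, entry)
    if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
        log.Fatal(err)
    }
    log.Println("host.minikube.internal entry refreshed")
}
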
	I1216 20:40:06.893432   47640 kubeadm.go:883] updating cluster {Name:test-preload-817668 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-817668 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:40:06.893577   47640 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1216 20:40:06.893625   47640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:40:06.931070   47640 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1216 20:40:06.931164   47640 ssh_runner.go:195] Run: which lz4
	I1216 20:40:06.935615   47640 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 20:40:06.940092   47640 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 20:40:06.940141   47640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1216 20:40:08.695833   47640 crio.go:462] duration metric: took 1.760258819s to copy over tarball
	I1216 20:40:08.695910   47640 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 20:40:11.233355   47640 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.53741636s)
	I1216 20:40:11.233389   47640 crio.go:469] duration metric: took 2.537525743s to extract the tarball
	I1216 20:40:11.233398   47640 ssh_runner.go:146] rm: /preloaded.tar.lz4
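
The preload step above checks for an existing /preloaded.tar.lz4 on the node, copies the roughly 459 MB cached tarball over, unpacks it into /var with tar and lz4 so container images are present before kubeadm runs, and then removes the tarball. A simplified sketch of just the extraction step, using the same tar flags as the log but run locally instead of over SSH:

package main

import (
    "log"
    "os/exec"
)

func main() {
    // Keep extended attributes so image layer capabilities survive, as in the log.
    cmd := exec.Command("sudo", "tar",
        "--xattrs", "--xattrs-include", "security.capability",
        "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    if out, err := cmd.CombinedOutput(); err != nil {
        log.Fatalf("extracting preload failed: %v\n%s", err, out)
    }
    log.Println("preloaded images unpacked into /var")
}
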
	I1216 20:40:11.275684   47640 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:40:11.320723   47640 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1216 20:40:11.320755   47640 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 20:40:11.320864   47640 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1216 20:40:11.320914   47640 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 20:40:11.320874   47640 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1216 20:40:11.320864   47640 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:40:11.320865   47640 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1216 20:40:11.320897   47640 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 20:40:11.320894   47640 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1216 20:40:11.320897   47640 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1216 20:40:11.322631   47640 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1216 20:40:11.322644   47640 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1216 20:40:11.322649   47640 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1216 20:40:11.322658   47640 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 20:40:11.322662   47640 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1216 20:40:11.322633   47640 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:40:11.322637   47640 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1216 20:40:11.322634   47640 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 20:40:11.535201   47640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1216 20:40:11.568913   47640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1216 20:40:11.569643   47640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1216 20:40:11.585327   47640 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1216 20:40:11.585375   47640 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1216 20:40:11.585425   47640 ssh_runner.go:195] Run: which crictl
	I1216 20:40:11.592818   47640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1216 20:40:11.659432   47640 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1216 20:40:11.659468   47640 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1216 20:40:11.659480   47640 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1216 20:40:11.659493   47640 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1216 20:40:11.659527   47640 ssh_runner.go:195] Run: which crictl
	I1216 20:40:11.659531   47640 ssh_runner.go:195] Run: which crictl
	I1216 20:40:11.659574   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1216 20:40:11.682553   47640 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1216 20:40:11.682613   47640 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1216 20:40:11.682659   47640 ssh_runner.go:195] Run: which crictl
	I1216 20:40:11.682714   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1216 20:40:11.682660   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1216 20:40:11.682855   47640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 20:40:11.684029   47640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1216 20:40:11.704930   47640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1216 20:40:11.714729   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1216 20:40:11.780322   47640 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:40:11.861447   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1216 20:40:11.861479   47640 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1216 20:40:11.861517   47640 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 20:40:11.861557   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1216 20:40:11.861593   47640 ssh_runner.go:195] Run: which crictl
	I1216 20:40:11.861621   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1216 20:40:11.861652   47640 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1216 20:40:11.861677   47640 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 20:40:11.861712   47640 ssh_runner.go:195] Run: which crictl
	I1216 20:40:11.879050   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1216 20:40:11.879061   47640 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1216 20:40:11.879101   47640 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1216 20:40:11.879163   47640 ssh_runner.go:195] Run: which crictl
	I1216 20:40:12.042249   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1216 20:40:12.042299   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1216 20:40:12.042321   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1216 20:40:12.042249   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 20:40:12.042341   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1216 20:40:12.042373   47640 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1216 20:40:12.042376   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1216 20:40:12.042466   47640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1216 20:40:12.183511   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 20:40:12.183532   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1216 20:40:12.183579   47640 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1216 20:40:12.183599   47640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1216 20:40:12.183619   47640 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1216 20:40:12.183665   47640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1216 20:40:12.183672   47640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1216 20:40:12.187629   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1216 20:40:12.187708   47640 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1216 20:40:12.187744   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1216 20:40:12.187784   47640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1216 20:40:12.284645   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 20:40:12.284691   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1216 20:40:12.284726   47640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1216 20:40:14.625006   47640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.441311873s)
	I1216 20:40:14.625042   47640 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1216 20:40:14.625080   47640 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1216 20:40:14.625114   47640 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (2.437455836s)
	I1216 20:40:14.625162   47640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1216 20:40:14.625181   47640 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1216 20:40:14.625216   47640 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4: (2.437452428s)
	I1216 20:40:14.625255   47640 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1216 20:40:14.625265   47640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.437464344s)
	I1216 20:40:14.625284   47640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1216 20:40:14.625323   47640 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (2.340648509s)
	I1216 20:40:14.625360   47640 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1216 20:40:14.625361   47640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1216 20:40:14.625388   47640 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (2.340684271s)
	I1216 20:40:14.625419   47640 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1216 20:40:14.625450   47640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1216 20:40:14.625486   47640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1216 20:40:16.802874   47640 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.177681831s)
	I1216 20:40:16.802915   47640 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1216 20:40:16.802924   47640 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (2.177724533s)
	I1216 20:40:16.802940   47640 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1216 20:40:16.802965   47640 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1216 20:40:16.802983   47640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.177585251s)
	I1216 20:40:16.802999   47640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1216 20:40:16.803016   47640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1216 20:40:16.803046   47640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.177545331s)
	I1216 20:40:16.803051   47640 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1216 20:40:16.803049   47640 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.177568616s)
	I1216 20:40:16.803075   47640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1216 20:40:16.803059   47640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1216 20:40:16.946676   47640 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1216 20:40:16.946787   47640 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1216 20:40:16.946831   47640 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1216 20:40:16.946893   47640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1216 20:40:17.794409   47640 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1216 20:40:17.794462   47640 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1216 20:40:17.794514   47640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1216 20:40:18.537967   47640 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1216 20:40:18.538004   47640 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1216 20:40:18.538049   47640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1216 20:40:18.993628   47640 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1216 20:40:18.993669   47640 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1216 20:40:18.993714   47640 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1216 20:40:19.441317   47640 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1216 20:40:19.441378   47640 cache_images.go:123] Successfully loaded all cached images
	I1216 20:40:19.441385   47640 cache_images.go:92] duration metric: took 8.120615848s to LoadCachedImages
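
Because the extracted preload still did not contain the expected v1.24.4 images, each required image is copied from the local cache and imported into the runtime with podman load, one tarball at a time, as the log above shows. A hypothetical loop doing the same; the image list here is illustrative, not the full set:

package main

import (
    "log"
    "os/exec"
    "path/filepath"
)

func main() {
    images := []string{
        "kube-apiserver_v1.24.4",
        "etcd_3.5.3-0",
        "pause_3.7",
        "coredns_v1.8.6",
    }
    for _, name := range images {
        tarball := filepath.Join("/var/lib/minikube/images", name)
        // `podman load -i` imports the image so CRI-O can run it, as in the log.
        if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
            log.Fatalf("loading %s failed: %v\n%s", name, err, out)
        }
        log.Printf("loaded %s", name)
    }
}
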
	I1216 20:40:19.441400   47640 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.24.4 crio true true} ...
	I1216 20:40:19.441516   47640 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-817668 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-817668 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 20:40:19.441578   47640 ssh_runner.go:195] Run: crio config
	I1216 20:40:19.493554   47640 cni.go:84] Creating CNI manager for ""
	I1216 20:40:19.493574   47640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:40:19.493585   47640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:40:19.493612   47640 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-817668 NodeName:test-preload-817668 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 20:40:19.493738   47640 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-817668"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 20:40:19.493799   47640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1216 20:40:19.505218   47640 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:40:19.505301   47640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:40:19.515870   47640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1216 20:40:19.534089   47640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:40:19.554464   47640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1216 20:40:19.574543   47640 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I1216 20:40:19.579052   47640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:40:19.594344   47640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:40:19.721438   47640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:40:19.741116   47640 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/test-preload-817668 for IP: 192.168.39.211
	I1216 20:40:19.741143   47640 certs.go:194] generating shared ca certs ...
	I1216 20:40:19.741172   47640 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:40:19.741364   47640 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:40:19.741421   47640 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:40:19.741435   47640 certs.go:256] generating profile certs ...
	I1216 20:40:19.741560   47640 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/test-preload-817668/client.key
	I1216 20:40:19.741642   47640 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/test-preload-817668/apiserver.key.de5d4324
	I1216 20:40:19.741680   47640 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/test-preload-817668/proxy-client.key
	I1216 20:40:19.741828   47640 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:40:19.741876   47640 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:40:19.741890   47640 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:40:19.741925   47640 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:40:19.741957   47640 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:40:19.741988   47640 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:40:19.742060   47640 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:40:19.742927   47640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:40:19.781901   47640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:40:19.818520   47640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:40:19.852663   47640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:40:19.903055   47640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/test-preload-817668/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1216 20:40:19.942792   47640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/test-preload-817668/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 20:40:19.980352   47640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/test-preload-817668/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:40:20.006356   47640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/test-preload-817668/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 20:40:20.033310   47640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:40:20.058594   47640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:40:20.083459   47640 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:40:20.108257   47640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 20:40:20.126627   47640 ssh_runner.go:195] Run: openssl version
	I1216 20:40:20.132727   47640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:40:20.144292   47640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:40:20.149759   47640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:40:20.149820   47640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:40:20.156453   47640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 20:40:20.168380   47640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:40:20.180296   47640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:40:20.185315   47640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:40:20.185385   47640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:40:20.191424   47640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 20:40:20.202827   47640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:40:20.214768   47640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:40:20.220055   47640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:40:20.220124   47640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:40:20.226450   47640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:40:20.238416   47640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:40:20.243776   47640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 20:40:20.250554   47640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 20:40:20.257024   47640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 20:40:20.263503   47640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 20:40:20.269778   47640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 20:40:20.276090   47640 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
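
The openssl "-checkend 86400" calls above verify that each control-plane certificate remains valid for at least 24 hours before the existing cluster configuration is reused. An equivalent check written with Go's crypto/x509; this is a sketch, while minikube itself shells out to openssl as shown in the log:

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "log"
    "os"
    "time"
)

// expiresWithin reports whether the PEM certificate at path expires inside the window.
func expiresWithin(path string, window time.Duration) (bool, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return false, fmt.Errorf("no PEM data in %s", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return time.Until(cert.NotAfter) < window, nil
}

func main() {
    soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    if err != nil {
        log.Fatal(err)
    }
    if soon {
        fmt.Println("certificate expires within 24h; it should be regenerated")
    } else {
        fmt.Println("certificate is still valid")
    }
}
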
	I1216 20:40:20.282475   47640 kubeadm.go:392] StartCluster: {Name:test-preload-817668 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-817668 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:40:20.282558   47640 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:40:20.282609   47640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:40:20.327316   47640 cri.go:89] found id: ""
	I1216 20:40:20.327393   47640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 20:40:20.338506   47640 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 20:40:20.338525   47640 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 20:40:20.338562   47640 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 20:40:20.349406   47640 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 20:40:20.349836   47640 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-817668" does not appear in /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:40:20.350025   47640 kubeconfig.go:62] /home/jenkins/minikube-integration/20091-7083/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-817668" cluster setting kubeconfig missing "test-preload-817668" context setting]
	I1216 20:40:20.350357   47640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:40:20.350998   47640 kapi.go:59] client config for test-preload-817668: &rest.Config{Host:"https://192.168.39.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20091-7083/.minikube/profiles/test-preload-817668/client.crt", KeyFile:"/home/jenkins/minikube-integration/20091-7083/.minikube/profiles/test-preload-817668/client.key", CAFile:"/home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x244c9c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 20:40:20.351756   47640 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 20:40:20.362625   47640 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.211
	I1216 20:40:20.362664   47640 kubeadm.go:1160] stopping kube-system containers ...
	I1216 20:40:20.362676   47640 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 20:40:20.362719   47640 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:40:20.407189   47640 cri.go:89] found id: ""
	I1216 20:40:20.407274   47640 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 20:40:20.425615   47640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:40:20.436622   47640 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:40:20.436643   47640 kubeadm.go:157] found existing configuration files:
	
	I1216 20:40:20.436690   47640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 20:40:20.447132   47640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:40:20.447198   47640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:40:20.458314   47640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 20:40:20.472178   47640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:40:20.472252   47640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:40:20.487673   47640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 20:40:20.498581   47640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:40:20.498668   47640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:40:20.509789   47640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 20:40:20.521166   47640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:40:20.521228   47640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 20:40:20.532787   47640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 20:40:20.544300   47640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:40:20.659961   47640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:40:21.400380   47640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:40:21.681034   47640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:40:21.758999   47640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:40:21.863456   47640 api_server.go:52] waiting for apiserver process to appear ...
	I1216 20:40:21.863555   47640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:40:22.364056   47640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:40:22.864493   47640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:40:22.883897   47640 api_server.go:72] duration metric: took 1.02042107s to wait for apiserver process to appear ...
	I1216 20:40:22.883929   47640 api_server.go:88] waiting for apiserver healthz status ...
	I1216 20:40:22.883953   47640 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1216 20:40:22.884549   47640 api_server.go:269] stopped: https://192.168.39.211:8443/healthz: Get "https://192.168.39.211:8443/healthz": dial tcp 192.168.39.211:8443: connect: connection refused
	I1216 20:40:23.384435   47640 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1216 20:40:27.057404   47640 api_server.go:279] https://192.168.39.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 20:40:27.057433   47640 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 20:40:27.057447   47640 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1216 20:40:27.102379   47640 api_server.go:279] https://192.168.39.211:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 20:40:27.102408   47640 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 20:40:27.384885   47640 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1216 20:40:27.390846   47640 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 20:40:27.390874   47640 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 20:40:27.884707   47640 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1216 20:40:27.893795   47640 api_server.go:279] https://192.168.39.211:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 20:40:27.893827   47640 api_server.go:103] status: https://192.168.39.211:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 20:40:28.384473   47640 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1216 20:40:28.389811   47640 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1216 20:40:28.399581   47640 api_server.go:141] control plane version: v1.24.4
	I1216 20:40:28.399609   47640 api_server.go:131] duration metric: took 5.515673722s to wait for apiserver health ...
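
The loop above polls the apiserver's /healthz endpoint until the post-start hooks finish. A minimal sketch of reproducing the same probe by hand, assuming the profile's client certificate and key live under the minikube home directory (paths below are illustrative, not taken from this run):

# anonymous requests are rejected with 403 as in the log, so present the profile's client cert
MINIKUBE_HOME=$HOME/.minikube                                   # assumed location
PROFILE=$MINIKUBE_HOME/profiles/test-preload-817668
curl --cacert "$MINIKUBE_HOME/ca.crt" \
     --cert "$PROFILE/client.crt" \
     --key  "$PROFILE/client.key" \
     "https://192.168.39.211:8443/healthz?verbose"
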
	I1216 20:40:28.399617   47640 cni.go:84] Creating CNI manager for ""
	I1216 20:40:28.399623   47640 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:40:28.401440   47640 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 20:40:28.402868   47640 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 20:40:28.415171   47640 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
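
The 496-byte conflist pushed above is the bridge CNI configuration referenced by the "Configuring bridge CNI" step. Its exact contents are not shown in this log; the snippet below is only an illustrative bridge-plus-portmap conflist in the same spirit, with every field value an assumption rather than the file minikube actually writes:

# illustrative only; not the exact 496-byte file from this run
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
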
	I1216 20:40:28.437031   47640 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 20:40:28.437133   47640 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 20:40:28.437151   47640 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 20:40:28.448875   47640 system_pods.go:59] 7 kube-system pods found
	I1216 20:40:28.448931   47640 system_pods.go:61] "coredns-6d4b75cb6d-9ngrd" [71ec151f-cacf-44bb-8665-448403f69b44] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 20:40:28.448939   47640 system_pods.go:61] "etcd-test-preload-817668" [0b12ccd3-e72c-4a10-ba17-903effd01629] Running
	I1216 20:40:28.448949   47640 system_pods.go:61] "kube-apiserver-test-preload-817668" [2a2060d9-ba5f-4138-9bc2-55fe4d6c62bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 20:40:28.448963   47640 system_pods.go:61] "kube-controller-manager-test-preload-817668" [4736a316-caf2-4b8c-acf2-87c686cd9b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 20:40:28.448973   47640 system_pods.go:61] "kube-proxy-mc7d2" [56a47da7-ace5-4721-915a-1139bd993681] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 20:40:28.448983   47640 system_pods.go:61] "kube-scheduler-test-preload-817668" [3f285193-67d8-4843-ae56-6af026426e2f] Running
	I1216 20:40:28.448992   47640 system_pods.go:61] "storage-provisioner" [df2461a8-d94a-41ce-af4e-88104696317f] Running
	I1216 20:40:28.449000   47640 system_pods.go:74] duration metric: took 11.943891ms to wait for pod list to return data ...
	I1216 20:40:28.449013   47640 node_conditions.go:102] verifying NodePressure condition ...
	I1216 20:40:28.452765   47640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 20:40:28.452796   47640 node_conditions.go:123] node cpu capacity is 2
	I1216 20:40:28.452810   47640 node_conditions.go:105] duration metric: took 3.788339ms to run NodePressure ...
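
The NodePressure check above reads the node's advertised capacity. The same figures can be pulled directly from the node object, assuming the kubeconfig context created for the profile:

kubectl --context test-preload-817668 get node test-preload-817668 \
  -o jsonpath='{.status.capacity}'
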
	I1216 20:40:28.452830   47640 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:40:28.678089   47640 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 20:40:28.684620   47640 kubeadm.go:739] kubelet initialised
	I1216 20:40:28.684643   47640 kubeadm.go:740] duration metric: took 6.532592ms waiting for restarted kubelet to initialise ...
	I1216 20:40:28.684652   47640 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 20:40:28.689294   47640 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-9ngrd" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:28.700676   47640 pod_ready.go:98] node "test-preload-817668" hosting pod "coredns-6d4b75cb6d-9ngrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-817668" has status "Ready":"False"
	I1216 20:40:28.700703   47640 pod_ready.go:82] duration metric: took 11.382616ms for pod "coredns-6d4b75cb6d-9ngrd" in "kube-system" namespace to be "Ready" ...
	E1216 20:40:28.700712   47640 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-817668" hosting pod "coredns-6d4b75cb6d-9ngrd" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-817668" has status "Ready":"False"
	I1216 20:40:28.700719   47640 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-817668" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:28.708893   47640 pod_ready.go:98] node "test-preload-817668" hosting pod "etcd-test-preload-817668" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-817668" has status "Ready":"False"
	I1216 20:40:28.708919   47640 pod_ready.go:82] duration metric: took 8.191647ms for pod "etcd-test-preload-817668" in "kube-system" namespace to be "Ready" ...
	E1216 20:40:28.708929   47640 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-817668" hosting pod "etcd-test-preload-817668" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-817668" has status "Ready":"False"
	I1216 20:40:28.708935   47640 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-817668" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:28.735037   47640 pod_ready.go:98] node "test-preload-817668" hosting pod "kube-apiserver-test-preload-817668" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-817668" has status "Ready":"False"
	I1216 20:40:28.735064   47640 pod_ready.go:82] duration metric: took 26.120607ms for pod "kube-apiserver-test-preload-817668" in "kube-system" namespace to be "Ready" ...
	E1216 20:40:28.735074   47640 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-817668" hosting pod "kube-apiserver-test-preload-817668" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-817668" has status "Ready":"False"
	I1216 20:40:28.735082   47640 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-817668" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:28.841414   47640 pod_ready.go:98] node "test-preload-817668" hosting pod "kube-controller-manager-test-preload-817668" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-817668" has status "Ready":"False"
	I1216 20:40:28.841449   47640 pod_ready.go:82] duration metric: took 106.356402ms for pod "kube-controller-manager-test-preload-817668" in "kube-system" namespace to be "Ready" ...
	E1216 20:40:28.841462   47640 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-817668" hosting pod "kube-controller-manager-test-preload-817668" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-817668" has status "Ready":"False"
	I1216 20:40:28.841472   47640 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mc7d2" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:29.241285   47640 pod_ready.go:98] node "test-preload-817668" hosting pod "kube-proxy-mc7d2" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-817668" has status "Ready":"False"
	I1216 20:40:29.241323   47640 pod_ready.go:82] duration metric: took 399.840122ms for pod "kube-proxy-mc7d2" in "kube-system" namespace to be "Ready" ...
	E1216 20:40:29.241332   47640 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-817668" hosting pod "kube-proxy-mc7d2" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-817668" has status "Ready":"False"
	I1216 20:40:29.241340   47640 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-817668" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:29.641469   47640 pod_ready.go:98] node "test-preload-817668" hosting pod "kube-scheduler-test-preload-817668" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-817668" has status "Ready":"False"
	I1216 20:40:29.641501   47640 pod_ready.go:82] duration metric: took 400.148777ms for pod "kube-scheduler-test-preload-817668" in "kube-system" namespace to be "Ready" ...
	E1216 20:40:29.641511   47640 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-817668" hosting pod "kube-scheduler-test-preload-817668" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-817668" has status "Ready":"False"
	I1216 20:40:29.641518   47640 pod_ready.go:39] duration metric: took 956.858678ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 20:40:29.641533   47640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 20:40:29.654886   47640 ops.go:34] apiserver oom_adj: -16
	I1216 20:40:29.654916   47640 kubeadm.go:597] duration metric: took 9.316384929s to restartPrimaryControlPlane
	I1216 20:40:29.654925   47640 kubeadm.go:394] duration metric: took 9.372466614s to StartCluster
	I1216 20:40:29.654942   47640 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:40:29.655007   47640 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:40:29.655606   47640 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:40:29.655810   47640 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 20:40:29.655898   47640 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 20:40:29.656000   47640 addons.go:69] Setting storage-provisioner=true in profile "test-preload-817668"
	I1216 20:40:29.656025   47640 addons.go:234] Setting addon storage-provisioner=true in "test-preload-817668"
	I1216 20:40:29.656032   47640 config.go:182] Loaded profile config "test-preload-817668": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1216 20:40:29.656047   47640 addons.go:69] Setting default-storageclass=true in profile "test-preload-817668"
	W1216 20:40:29.656037   47640 addons.go:243] addon storage-provisioner should already be in state true
	I1216 20:40:29.656075   47640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-817668"
	I1216 20:40:29.656110   47640 host.go:66] Checking if "test-preload-817668" exists ...
	I1216 20:40:29.656439   47640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:40:29.656473   47640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:40:29.656538   47640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:40:29.656581   47640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:40:29.657941   47640 out.go:177] * Verifying Kubernetes components...
	I1216 20:40:29.659372   47640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:40:29.671897   47640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36031
	I1216 20:40:29.671962   47640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I1216 20:40:29.672410   47640 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:40:29.672551   47640 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:40:29.672934   47640 main.go:141] libmachine: Using API Version  1
	I1216 20:40:29.672953   47640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:40:29.673085   47640 main.go:141] libmachine: Using API Version  1
	I1216 20:40:29.673111   47640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:40:29.673278   47640 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:40:29.673482   47640 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:40:29.673670   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetState
	I1216 20:40:29.673846   47640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:40:29.673901   47640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:40:29.675890   47640 kapi.go:59] client config for test-preload-817668: &rest.Config{Host:"https://192.168.39.211:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20091-7083/.minikube/profiles/test-preload-817668/client.crt", KeyFile:"/home/jenkins/minikube-integration/20091-7083/.minikube/profiles/test-preload-817668/client.key", CAFile:"/home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x244c9c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 20:40:29.676177   47640 addons.go:234] Setting addon default-storageclass=true in "test-preload-817668"
	W1216 20:40:29.676194   47640 addons.go:243] addon default-storageclass should already be in state true
	I1216 20:40:29.676234   47640 host.go:66] Checking if "test-preload-817668" exists ...
	I1216 20:40:29.676497   47640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:40:29.676547   47640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:40:29.691336   47640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35115
	I1216 20:40:29.691771   47640 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:40:29.692241   47640 main.go:141] libmachine: Using API Version  1
	I1216 20:40:29.692273   47640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:40:29.692627   47640 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:40:29.693141   47640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:40:29.693190   47640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:40:29.694301   47640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40303
	I1216 20:40:29.723964   47640 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:40:29.724558   47640 main.go:141] libmachine: Using API Version  1
	I1216 20:40:29.724591   47640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:40:29.725007   47640 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:40:29.725331   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetState
	I1216 20:40:29.727327   47640 main.go:141] libmachine: (test-preload-817668) Calling .DriverName
	I1216 20:40:29.729849   47640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:40:29.731506   47640 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 20:40:29.731527   47640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 20:40:29.731546   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHHostname
	I1216 20:40:29.735087   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:29.735564   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:c7:e6", ip: ""} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:29.735596   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:29.735803   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHPort
	I1216 20:40:29.736028   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHKeyPath
	I1216 20:40:29.736196   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHUsername
	I1216 20:40:29.736331   47640 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/test-preload-817668/id_rsa Username:docker}
	I1216 20:40:29.739042   47640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36875
	I1216 20:40:29.739558   47640 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:40:29.740015   47640 main.go:141] libmachine: Using API Version  1
	I1216 20:40:29.740040   47640 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:40:29.740368   47640 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:40:29.740565   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetState
	I1216 20:40:29.742320   47640 main.go:141] libmachine: (test-preload-817668) Calling .DriverName
	I1216 20:40:29.742535   47640 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 20:40:29.742551   47640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 20:40:29.742566   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHHostname
	I1216 20:40:29.745376   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:29.745875   47640 main.go:141] libmachine: (test-preload-817668) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e4:c7:e6", ip: ""} in network mk-test-preload-817668: {Iface:virbr1 ExpiryTime:2024-12-16 21:39:58 +0000 UTC Type:0 Mac:52:54:00:e4:c7:e6 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:test-preload-817668 Clientid:01:52:54:00:e4:c7:e6}
	I1216 20:40:29.745906   47640 main.go:141] libmachine: (test-preload-817668) DBG | domain test-preload-817668 has defined IP address 192.168.39.211 and MAC address 52:54:00:e4:c7:e6 in network mk-test-preload-817668
	I1216 20:40:29.746092   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHPort
	I1216 20:40:29.746311   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHKeyPath
	I1216 20:40:29.746486   47640 main.go:141] libmachine: (test-preload-817668) Calling .GetSSHUsername
	I1216 20:40:29.746652   47640 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/test-preload-817668/id_rsa Username:docker}
	I1216 20:40:29.847116   47640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:40:29.866224   47640 node_ready.go:35] waiting up to 6m0s for node "test-preload-817668" to be "Ready" ...
	I1216 20:40:29.925303   47640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 20:40:29.947502   47640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 20:40:30.887572   47640 main.go:141] libmachine: Making call to close driver server
	I1216 20:40:30.887599   47640 main.go:141] libmachine: (test-preload-817668) Calling .Close
	I1216 20:40:30.887885   47640 main.go:141] libmachine: Successfully made call to close driver server
	I1216 20:40:30.887905   47640 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 20:40:30.887907   47640 main.go:141] libmachine: (test-preload-817668) DBG | Closing plugin on server side
	I1216 20:40:30.887913   47640 main.go:141] libmachine: Making call to close driver server
	I1216 20:40:30.887921   47640 main.go:141] libmachine: (test-preload-817668) Calling .Close
	I1216 20:40:30.888130   47640 main.go:141] libmachine: Successfully made call to close driver server
	I1216 20:40:30.888146   47640 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 20:40:30.895729   47640 main.go:141] libmachine: Making call to close driver server
	I1216 20:40:30.895750   47640 main.go:141] libmachine: (test-preload-817668) Calling .Close
	I1216 20:40:30.896071   47640 main.go:141] libmachine: (test-preload-817668) DBG | Closing plugin on server side
	I1216 20:40:30.896109   47640 main.go:141] libmachine: Successfully made call to close driver server
	I1216 20:40:30.896127   47640 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 20:40:30.924303   47640 main.go:141] libmachine: Making call to close driver server
	I1216 20:40:30.924331   47640 main.go:141] libmachine: (test-preload-817668) Calling .Close
	I1216 20:40:30.924641   47640 main.go:141] libmachine: Successfully made call to close driver server
	I1216 20:40:30.924660   47640 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 20:40:30.924670   47640 main.go:141] libmachine: Making call to close driver server
	I1216 20:40:30.924678   47640 main.go:141] libmachine: (test-preload-817668) Calling .Close
	I1216 20:40:30.924643   47640 main.go:141] libmachine: (test-preload-817668) DBG | Closing plugin on server side
	I1216 20:40:30.924957   47640 main.go:141] libmachine: (test-preload-817668) DBG | Closing plugin on server side
	I1216 20:40:30.924991   47640 main.go:141] libmachine: Successfully made call to close driver server
	I1216 20:40:30.925006   47640 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 20:40:30.927060   47640 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1216 20:40:30.928363   47640 addons.go:510] duration metric: took 1.272477234s for enable addons: enabled=[default-storageclass storage-provisioner]
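
With the two addons reported enabled, the result can be verified from the host. A short sketch using standard minikube and kubectl commands (not taken from this run):

out/minikube-linux-amd64 -p test-preload-817668 addons list
kubectl --context test-preload-817668 get storageclass
kubectl --context test-preload-817668 -n kube-system get pod storage-provisioner
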
	I1216 20:40:31.870588   47640 node_ready.go:53] node "test-preload-817668" has status "Ready":"False"
	I1216 20:40:34.369814   47640 node_ready.go:53] node "test-preload-817668" has status "Ready":"False"
	I1216 20:40:36.370934   47640 node_ready.go:53] node "test-preload-817668" has status "Ready":"False"
	I1216 20:40:37.872158   47640 node_ready.go:49] node "test-preload-817668" has status "Ready":"True"
	I1216 20:40:37.872181   47640 node_ready.go:38] duration metric: took 8.005923831s for node "test-preload-817668" to be "Ready" ...
	I1216 20:40:37.872190   47640 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 20:40:37.879402   47640 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-9ngrd" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:38.385810   47640 pod_ready.go:93] pod "coredns-6d4b75cb6d-9ngrd" in "kube-system" namespace has status "Ready":"True"
	I1216 20:40:38.385838   47640 pod_ready.go:82] duration metric: took 506.411676ms for pod "coredns-6d4b75cb6d-9ngrd" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:38.385847   47640 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-817668" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:38.390693   47640 pod_ready.go:93] pod "etcd-test-preload-817668" in "kube-system" namespace has status "Ready":"True"
	I1216 20:40:38.390718   47640 pod_ready.go:82] duration metric: took 4.864843ms for pod "etcd-test-preload-817668" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:38.390727   47640 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-817668" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:38.900385   47640 pod_ready.go:93] pod "kube-apiserver-test-preload-817668" in "kube-system" namespace has status "Ready":"True"
	I1216 20:40:38.900419   47640 pod_ready.go:82] duration metric: took 509.685991ms for pod "kube-apiserver-test-preload-817668" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:38.900430   47640 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-817668" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:39.907914   47640 pod_ready.go:93] pod "kube-controller-manager-test-preload-817668" in "kube-system" namespace has status "Ready":"True"
	I1216 20:40:39.907953   47640 pod_ready.go:82] duration metric: took 1.007515921s for pod "kube-controller-manager-test-preload-817668" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:39.907968   47640 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mc7d2" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:39.919568   47640 pod_ready.go:93] pod "kube-proxy-mc7d2" in "kube-system" namespace has status "Ready":"True"
	I1216 20:40:39.919593   47640 pod_ready.go:82] duration metric: took 11.615712ms for pod "kube-proxy-mc7d2" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:39.919603   47640 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-817668" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:40.272077   47640 pod_ready.go:93] pod "kube-scheduler-test-preload-817668" in "kube-system" namespace has status "Ready":"True"
	I1216 20:40:40.272111   47640 pod_ready.go:82] duration metric: took 352.500365ms for pod "kube-scheduler-test-preload-817668" in "kube-system" namespace to be "Ready" ...
	I1216 20:40:40.272126   47640 pod_ready.go:39] duration metric: took 2.399925937s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
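
The per-pod readiness loop above can be approximated from the host with kubectl wait against the same labels, assuming the context name matches the profile; for example:

kubectl --context test-preload-817668 -n kube-system wait pod \
  --selector k8s-app=kube-dns --for=condition=Ready --timeout=6m
kubectl --context test-preload-817668 -n kube-system wait pod \
  --selector component=kube-apiserver --for=condition=Ready --timeout=6m
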
	I1216 20:40:40.272143   47640 api_server.go:52] waiting for apiserver process to appear ...
	I1216 20:40:40.272204   47640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:40:40.288535   47640 api_server.go:72] duration metric: took 10.632694248s to wait for apiserver process to appear ...
	I1216 20:40:40.288564   47640 api_server.go:88] waiting for apiserver healthz status ...
	I1216 20:40:40.288584   47640 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I1216 20:40:40.296497   47640 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I1216 20:40:40.297971   47640 api_server.go:141] control plane version: v1.24.4
	I1216 20:40:40.297999   47640 api_server.go:131] duration metric: took 9.426878ms to wait for apiserver health ...
	I1216 20:40:40.298009   47640 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 20:40:40.474143   47640 system_pods.go:59] 7 kube-system pods found
	I1216 20:40:40.474178   47640 system_pods.go:61] "coredns-6d4b75cb6d-9ngrd" [71ec151f-cacf-44bb-8665-448403f69b44] Running
	I1216 20:40:40.474195   47640 system_pods.go:61] "etcd-test-preload-817668" [0b12ccd3-e72c-4a10-ba17-903effd01629] Running
	I1216 20:40:40.474201   47640 system_pods.go:61] "kube-apiserver-test-preload-817668" [2a2060d9-ba5f-4138-9bc2-55fe4d6c62bb] Running
	I1216 20:40:40.474207   47640 system_pods.go:61] "kube-controller-manager-test-preload-817668" [4736a316-caf2-4b8c-acf2-87c686cd9b42] Running
	I1216 20:40:40.474212   47640 system_pods.go:61] "kube-proxy-mc7d2" [56a47da7-ace5-4721-915a-1139bd993681] Running
	I1216 20:40:40.474216   47640 system_pods.go:61] "kube-scheduler-test-preload-817668" [3f285193-67d8-4843-ae56-6af026426e2f] Running
	I1216 20:40:40.474221   47640 system_pods.go:61] "storage-provisioner" [df2461a8-d94a-41ce-af4e-88104696317f] Running
	I1216 20:40:40.474229   47640 system_pods.go:74] duration metric: took 176.212903ms to wait for pod list to return data ...
	I1216 20:40:40.474238   47640 default_sa.go:34] waiting for default service account to be created ...
	I1216 20:40:40.671131   47640 default_sa.go:45] found service account: "default"
	I1216 20:40:40.671167   47640 default_sa.go:55] duration metric: took 196.920471ms for default service account to be created ...
	I1216 20:40:40.671180   47640 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 20:40:40.874501   47640 system_pods.go:86] 7 kube-system pods found
	I1216 20:40:40.874531   47640 system_pods.go:89] "coredns-6d4b75cb6d-9ngrd" [71ec151f-cacf-44bb-8665-448403f69b44] Running
	I1216 20:40:40.874537   47640 system_pods.go:89] "etcd-test-preload-817668" [0b12ccd3-e72c-4a10-ba17-903effd01629] Running
	I1216 20:40:40.874541   47640 system_pods.go:89] "kube-apiserver-test-preload-817668" [2a2060d9-ba5f-4138-9bc2-55fe4d6c62bb] Running
	I1216 20:40:40.874545   47640 system_pods.go:89] "kube-controller-manager-test-preload-817668" [4736a316-caf2-4b8c-acf2-87c686cd9b42] Running
	I1216 20:40:40.874549   47640 system_pods.go:89] "kube-proxy-mc7d2" [56a47da7-ace5-4721-915a-1139bd993681] Running
	I1216 20:40:40.874552   47640 system_pods.go:89] "kube-scheduler-test-preload-817668" [3f285193-67d8-4843-ae56-6af026426e2f] Running
	I1216 20:40:40.874555   47640 system_pods.go:89] "storage-provisioner" [df2461a8-d94a-41ce-af4e-88104696317f] Running
	I1216 20:40:40.874562   47640 system_pods.go:126] duration metric: took 203.374322ms to wait for k8s-apps to be running ...
	I1216 20:40:40.874568   47640 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 20:40:40.874614   47640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 20:40:40.894832   47640 system_svc.go:56] duration metric: took 20.254941ms WaitForService to wait for kubelet
	I1216 20:40:40.894867   47640 kubeadm.go:582] duration metric: took 11.239032184s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 20:40:40.894883   47640 node_conditions.go:102] verifying NodePressure condition ...
	I1216 20:40:41.072612   47640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 20:40:41.072639   47640 node_conditions.go:123] node cpu capacity is 2
	I1216 20:40:41.072653   47640 node_conditions.go:105] duration metric: took 177.765746ms to run NodePressure ...
	I1216 20:40:41.072664   47640 start.go:241] waiting for startup goroutines ...
	I1216 20:40:41.072671   47640 start.go:246] waiting for cluster config update ...
	I1216 20:40:41.072680   47640 start.go:255] writing updated cluster config ...
	I1216 20:40:41.072920   47640 ssh_runner.go:195] Run: rm -f paused
	I1216 20:40:41.120479   47640 start.go:600] kubectl: 1.32.0, cluster: 1.24.4 (minor skew: 8)
	I1216 20:40:41.122536   47640 out.go:201] 
	W1216 20:40:41.124355   47640 out.go:270] ! /usr/local/bin/kubectl is version 1.32.0, which may have incompatibilities with Kubernetes 1.24.4.
	I1216 20:40:41.126092   47640 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1216 20:40:41.127519   47640 out.go:177] * Done! kubectl is now configured to use "test-preload-817668" cluster and "default" namespace by default
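
The warning above flags client/server minor-version skew (kubectl 1.32.0 against a v1.24.4 control plane). As the output itself suggests, the bundled, version-matched kubectl can be used instead, e.g.:

out/minikube-linux-amd64 -p test-preload-817668 kubectl -- get pods -A
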
	
	
	==> CRI-O <==
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.040906577Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734381642040884126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f8a6060-621b-49e8-b983-9d6c6199c541 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.041617609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3fee8ac-cb0e-415f-9f4b-6093757b92ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.041689374Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3fee8ac-cb0e-415f-9f4b-6093757b92ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.041870837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a3a2ef021ba0eccb94376ea9f3d4e8633576f3ca70b4751aa226e779403c46b7,PodSandboxId:26c2d7029315b08a5fd714cf3cd206dc5bcaf9804601ff7c031a14cdcc0f671f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1734381636132239647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9ngrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ec151f-cacf-44bb-8665-448403f69b44,},Annotations:map[string]string{io.kubernetes.container.hash: 24daeb4f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be28863fbf74dca62cebf717970aeeec89066b0aec510b685aa3bf3f5698fe60,PodSandboxId:2f0126eb318dc748672e636733ac633ac3b42b2f5f934499cde6d85b7a0a4118,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1734381628847685657,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mc7d2,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 56a47da7-ace5-4721-915a-1139bd993681,},Annotations:map[string]string{io.kubernetes.container.hash: 9d27f31c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33981d6726e9a1b30ff94bd164f0acc16eef680b65857cadc36da51f6630095a,PodSandboxId:0308d3b9c4ab5dc484ae32ddf6d1566503ade244ebea496855b8b2aad1593cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734381628846372841,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df
2461a8-d94a-41ce-af4e-88104696317f,},Annotations:map[string]string{io.kubernetes.container.hash: 989ac148,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cac632629175f2f3a70b32e44342b8579cd82650d1e5453c957fdd0f84702c,PodSandboxId:9591f9594ffba11fa5b1a71ff3b70bd97c49b00267f6da826bc410b724d191c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1734381622601311471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-817668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b406844
3eaa357e85060eeb9b971b2,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc49081733f0e374e8796b86b0d6a30d31ce979e6c363010a1da8ab06d6ce63f,PodSandboxId:bd557e4ed606e5b3ca80842d62e36a48c15a3e7bb20c674df2810c002a18dcb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1734381622585851720,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-817668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1736d682af1135ee728b
7aedd018a9c6,},Annotations:map[string]string{io.kubernetes.container.hash: d7f23f3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5ea551d0f39b38ad5459fdce183358f04f4e78aaf55aa0dc435a5ae8a3fa61,PodSandboxId:0e4344d6904695c0db6851d697c0b87accfb18ebfdb7e9b6b6f77c5d25716161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1734381622526490604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-817668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f87c
99eb8e558a82fba37e3dc444d4dc,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c075d316e175bc28c0c26b4babc084f99c6c6b74d5b09bd0295013f34aa323,PodSandboxId:b951ac20ba2e47cd9f810b88c31ce369d6bfbb878cbcc875d1f4e0e5597ea55a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1734381622558118081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-817668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15da9641747f756617a31213c9768de1,},Annotation
s:map[string]string{io.kubernetes.container.hash: 544e6637,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3fee8ac-cb0e-415f-9f4b-6093757b92ad name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.082385366Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f741531e-553e-4442-9439-7d137d0c6651 name=/runtime.v1.RuntimeService/Version
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.082476683Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f741531e-553e-4442-9439-7d137d0c6651 name=/runtime.v1.RuntimeService/Version
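
The CRI-O debug entries in this section record Version, ImageFsInfo, and ListContainers calls made over the CRI API. The same queries can be issued by hand with crictl inside the node; a sketch:

out/minikube-linux-amd64 -p test-preload-817668 ssh -- sudo crictl version
out/minikube-linux-amd64 -p test-preload-817668 ssh -- sudo crictl imagefsinfo
out/minikube-linux-amd64 -p test-preload-817668 ssh -- sudo crictl ps -a
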
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.084148073Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dfb8f8ec-4cc0-4236-8724-28799d1dd4be name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.084591426Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734381642084571458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dfb8f8ec-4cc0-4236-8724-28799d1dd4be name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.085582101Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cae7cb29-66da-495c-920c-8863efdb5db9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.085655109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cae7cb29-66da-495c-920c-8863efdb5db9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.085889938Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a3a2ef021ba0eccb94376ea9f3d4e8633576f3ca70b4751aa226e779403c46b7,PodSandboxId:26c2d7029315b08a5fd714cf3cd206dc5bcaf9804601ff7c031a14cdcc0f671f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1734381636132239647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9ngrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ec151f-cacf-44bb-8665-448403f69b44,},Annotations:map[string]string{io.kubernetes.container.hash: 24daeb4f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be28863fbf74dca62cebf717970aeeec89066b0aec510b685aa3bf3f5698fe60,PodSandboxId:2f0126eb318dc748672e636733ac633ac3b42b2f5f934499cde6d85b7a0a4118,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1734381628847685657,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mc7d2,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 56a47da7-ace5-4721-915a-1139bd993681,},Annotations:map[string]string{io.kubernetes.container.hash: 9d27f31c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33981d6726e9a1b30ff94bd164f0acc16eef680b65857cadc36da51f6630095a,PodSandboxId:0308d3b9c4ab5dc484ae32ddf6d1566503ade244ebea496855b8b2aad1593cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734381628846372841,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df
2461a8-d94a-41ce-af4e-88104696317f,},Annotations:map[string]string{io.kubernetes.container.hash: 989ac148,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cac632629175f2f3a70b32e44342b8579cd82650d1e5453c957fdd0f84702c,PodSandboxId:9591f9594ffba11fa5b1a71ff3b70bd97c49b00267f6da826bc410b724d191c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1734381622601311471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-817668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b406844
3eaa357e85060eeb9b971b2,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc49081733f0e374e8796b86b0d6a30d31ce979e6c363010a1da8ab06d6ce63f,PodSandboxId:bd557e4ed606e5b3ca80842d62e36a48c15a3e7bb20c674df2810c002a18dcb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1734381622585851720,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-817668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1736d682af1135ee728b
7aedd018a9c6,},Annotations:map[string]string{io.kubernetes.container.hash: d7f23f3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5ea551d0f39b38ad5459fdce183358f04f4e78aaf55aa0dc435a5ae8a3fa61,PodSandboxId:0e4344d6904695c0db6851d697c0b87accfb18ebfdb7e9b6b6f77c5d25716161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1734381622526490604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-817668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f87c
99eb8e558a82fba37e3dc444d4dc,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c075d316e175bc28c0c26b4babc084f99c6c6b74d5b09bd0295013f34aa323,PodSandboxId:b951ac20ba2e47cd9f810b88c31ce369d6bfbb878cbcc875d1f4e0e5597ea55a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1734381622558118081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-817668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15da9641747f756617a31213c9768de1,},Annotation
s:map[string]string{io.kubernetes.container.hash: 544e6637,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cae7cb29-66da-495c-920c-8863efdb5db9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.128478591Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a8baf92-0ed7-413a-bf9b-225b8853871f name=/runtime.v1.RuntimeService/Version
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.128573010Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a8baf92-0ed7-413a-bf9b-225b8853871f name=/runtime.v1.RuntimeService/Version
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.130210078Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b6b2f40-3e37-48a4-98c9-43fa6a05a7d7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.130686784Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734381642130659424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b6b2f40-3e37-48a4-98c9-43fa6a05a7d7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.131395746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b1a3d9b-496b-4770-a2db-84f76f67e152 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.131467233Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b1a3d9b-496b-4770-a2db-84f76f67e152 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.131662495Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a3a2ef021ba0eccb94376ea9f3d4e8633576f3ca70b4751aa226e779403c46b7,PodSandboxId:26c2d7029315b08a5fd714cf3cd206dc5bcaf9804601ff7c031a14cdcc0f671f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1734381636132239647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9ngrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ec151f-cacf-44bb-8665-448403f69b44,},Annotations:map[string]string{io.kubernetes.container.hash: 24daeb4f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be28863fbf74dca62cebf717970aeeec89066b0aec510b685aa3bf3f5698fe60,PodSandboxId:2f0126eb318dc748672e636733ac633ac3b42b2f5f934499cde6d85b7a0a4118,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1734381628847685657,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mc7d2,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 56a47da7-ace5-4721-915a-1139bd993681,},Annotations:map[string]string{io.kubernetes.container.hash: 9d27f31c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33981d6726e9a1b30ff94bd164f0acc16eef680b65857cadc36da51f6630095a,PodSandboxId:0308d3b9c4ab5dc484ae32ddf6d1566503ade244ebea496855b8b2aad1593cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734381628846372841,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df
2461a8-d94a-41ce-af4e-88104696317f,},Annotations:map[string]string{io.kubernetes.container.hash: 989ac148,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cac632629175f2f3a70b32e44342b8579cd82650d1e5453c957fdd0f84702c,PodSandboxId:9591f9594ffba11fa5b1a71ff3b70bd97c49b00267f6da826bc410b724d191c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1734381622601311471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-817668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b406844
3eaa357e85060eeb9b971b2,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc49081733f0e374e8796b86b0d6a30d31ce979e6c363010a1da8ab06d6ce63f,PodSandboxId:bd557e4ed606e5b3ca80842d62e36a48c15a3e7bb20c674df2810c002a18dcb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1734381622585851720,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-817668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1736d682af1135ee728b
7aedd018a9c6,},Annotations:map[string]string{io.kubernetes.container.hash: d7f23f3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5ea551d0f39b38ad5459fdce183358f04f4e78aaf55aa0dc435a5ae8a3fa61,PodSandboxId:0e4344d6904695c0db6851d697c0b87accfb18ebfdb7e9b6b6f77c5d25716161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1734381622526490604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-817668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f87c
99eb8e558a82fba37e3dc444d4dc,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c075d316e175bc28c0c26b4babc084f99c6c6b74d5b09bd0295013f34aa323,PodSandboxId:b951ac20ba2e47cd9f810b88c31ce369d6bfbb878cbcc875d1f4e0e5597ea55a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1734381622558118081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-817668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15da9641747f756617a31213c9768de1,},Annotation
s:map[string]string{io.kubernetes.container.hash: 544e6637,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b1a3d9b-496b-4770-a2db-84f76f67e152 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.168420725Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=065ffa78-b536-4293-b034-e88ad5f2d0f0 name=/runtime.v1.RuntimeService/Version
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.168519926Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=065ffa78-b536-4293-b034-e88ad5f2d0f0 name=/runtime.v1.RuntimeService/Version
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.170305594Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=baaf9eb4-1862-4c82-b652-95f58cc5e8f1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.170920053Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734381642170894755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=baaf9eb4-1862-4c82-b652-95f58cc5e8f1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.171523854Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42a7da3a-7ee2-4333-889c-4c3c6bd10154 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.171597073Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42a7da3a-7ee2-4333-889c-4c3c6bd10154 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:40:42 test-preload-817668 crio[671]: time="2024-12-16 20:40:42.171822796Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a3a2ef021ba0eccb94376ea9f3d4e8633576f3ca70b4751aa226e779403c46b7,PodSandboxId:26c2d7029315b08a5fd714cf3cd206dc5bcaf9804601ff7c031a14cdcc0f671f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1734381636132239647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-9ngrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71ec151f-cacf-44bb-8665-448403f69b44,},Annotations:map[string]string{io.kubernetes.container.hash: 24daeb4f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be28863fbf74dca62cebf717970aeeec89066b0aec510b685aa3bf3f5698fe60,PodSandboxId:2f0126eb318dc748672e636733ac633ac3b42b2f5f934499cde6d85b7a0a4118,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1734381628847685657,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mc7d2,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 56a47da7-ace5-4721-915a-1139bd993681,},Annotations:map[string]string{io.kubernetes.container.hash: 9d27f31c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33981d6726e9a1b30ff94bd164f0acc16eef680b65857cadc36da51f6630095a,PodSandboxId:0308d3b9c4ab5dc484ae32ddf6d1566503ade244ebea496855b8b2aad1593cee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734381628846372841,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df
2461a8-d94a-41ce-af4e-88104696317f,},Annotations:map[string]string{io.kubernetes.container.hash: 989ac148,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0cac632629175f2f3a70b32e44342b8579cd82650d1e5453c957fdd0f84702c,PodSandboxId:9591f9594ffba11fa5b1a71ff3b70bd97c49b00267f6da826bc410b724d191c6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1734381622601311471,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-817668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b406844
3eaa357e85060eeb9b971b2,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc49081733f0e374e8796b86b0d6a30d31ce979e6c363010a1da8ab06d6ce63f,PodSandboxId:bd557e4ed606e5b3ca80842d62e36a48c15a3e7bb20c674df2810c002a18dcb6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1734381622585851720,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-817668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1736d682af1135ee728b
7aedd018a9c6,},Annotations:map[string]string{io.kubernetes.container.hash: d7f23f3b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5ea551d0f39b38ad5459fdce183358f04f4e78aaf55aa0dc435a5ae8a3fa61,PodSandboxId:0e4344d6904695c0db6851d697c0b87accfb18ebfdb7e9b6b6f77c5d25716161,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1734381622526490604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-817668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f87c
99eb8e558a82fba37e3dc444d4dc,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71c075d316e175bc28c0c26b4babc084f99c6c6b74d5b09bd0295013f34aa323,PodSandboxId:b951ac20ba2e47cd9f810b88c31ce369d6bfbb878cbcc875d1f4e0e5597ea55a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1734381622558118081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-817668,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15da9641747f756617a31213c9768de1,},Annotation
s:map[string]string{io.kubernetes.container.hash: 544e6637,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42a7da3a-7ee2-4333-889c-4c3c6bd10154 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a3a2ef021ba0e       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   26c2d7029315b       coredns-6d4b75cb6d-9ngrd
	be28863fbf74d       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   2f0126eb318dc       kube-proxy-mc7d2
	33981d6726e9a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   0308d3b9c4ab5       storage-provisioner
	c0cac63262917       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   9591f9594ffba       kube-scheduler-test-preload-817668
	fc49081733f0e       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   bd557e4ed606e       kube-apiserver-test-preload-817668
	71c075d316e17       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   b951ac20ba2e4       etcd-test-preload-817668
	fa5ea551d0f39       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   0e4344d690469       kube-controller-manager-test-preload-817668
	
	
	==> coredns [a3a2ef021ba0eccb94376ea9f3d4e8633576f3ca70b4751aa226e779403c46b7] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:51171 - 9291 "HINFO IN 4350006553679916605.8328247122588051523. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.064688733s
	
	
	==> describe nodes <==
	Name:               test-preload-817668
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-817668
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477
	                    minikube.k8s.io/name=test-preload-817668
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T20_37_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 20:37:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-817668
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 20:40:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 20:40:37 +0000   Mon, 16 Dec 2024 20:37:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 20:40:37 +0000   Mon, 16 Dec 2024 20:37:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 20:40:37 +0000   Mon, 16 Dec 2024 20:37:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 20:40:37 +0000   Mon, 16 Dec 2024 20:40:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    test-preload-817668
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0589e244fc6c4e79a01665785445db58
	  System UUID:                0589e244-fc6c-4e79-a016-65785445db58
	  Boot ID:                    7cce3567-b7ae-4c11-8100-a3eea2a3f3bd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-9ngrd                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m14s
	  kube-system                 etcd-test-preload-817668                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m27s
	  kube-system                 kube-apiserver-test-preload-817668             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kube-controller-manager-test-preload-817668    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kube-proxy-mc7d2                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 kube-scheduler-test-preload-817668             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13s                    kube-proxy       
	  Normal  Starting                 3m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m35s (x5 over 3m35s)  kubelet          Node test-preload-817668 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m35s (x5 over 3m35s)  kubelet          Node test-preload-817668 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m35s (x4 over 3m35s)  kubelet          Node test-preload-817668 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m27s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m27s                  kubelet          Node test-preload-817668 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s                  kubelet          Node test-preload-817668 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m27s                  kubelet          Node test-preload-817668 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m16s                  kubelet          Node test-preload-817668 status is now: NodeReady
	  Normal  RegisteredNode           3m15s                  node-controller  Node test-preload-817668 event: Registered Node test-preload-817668 in Controller
	  Normal  Starting                 21s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20s (x8 over 21s)      kubelet          Node test-preload-817668 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 21s)      kubelet          Node test-preload-817668 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 21s)      kubelet          Node test-preload-817668 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                     node-controller  Node test-preload-817668 event: Registered Node test-preload-817668 in Controller
	
	
	==> dmesg <==
	[Dec16 20:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053208] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041911] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.969397] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.933356] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.628674] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec16 20:40] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.059321] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068191] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.181765] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.144270] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.284392] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[ +13.070259] systemd-fstab-generator[989]: Ignoring "noauto" option for root device
	[  +0.057799] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.892873] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[  +7.166126] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.973453] systemd-fstab-generator[1764]: Ignoring "noauto" option for root device
	[  +6.116908] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [71c075d316e175bc28c0c26b4babc084f99c6c6b74d5b09bd0295013f34aa323] <==
	{"level":"info","ts":"2024-12-16T20:40:22.972Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d3f1da2044f49cdd","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-12-16T20:40:22.974Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-16T20:40:22.974Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-12-16T20:40:22.975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd switched to configuration voters=(15272227643520752861)"}
	{"level":"info","ts":"2024-12-16T20:40:22.976Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a3f4522b5c780b58","local-member-id":"d3f1da2044f49cdd","added-peer-id":"d3f1da2044f49cdd","added-peer-peer-urls":["https://192.168.39.211:2380"]}
	{"level":"info","ts":"2024-12-16T20:40:22.977Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3f4522b5c780b58","local-member-id":"d3f1da2044f49cdd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T20:40:22.977Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T20:40:22.975Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.211:2380"}
	{"level":"info","ts":"2024-12-16T20:40:22.982Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.211:2380"}
	{"level":"info","ts":"2024-12-16T20:40:22.982Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d3f1da2044f49cdd","initial-advertise-peer-urls":["https://192.168.39.211:2380"],"listen-peer-urls":["https://192.168.39.211:2380"],"advertise-client-urls":["https://192.168.39.211:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.211:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-16T20:40:22.982Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-16T20:40:24.529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-16T20:40:24.529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-16T20:40:24.529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd received MsgPreVoteResp from d3f1da2044f49cdd at term 2"}
	{"level":"info","ts":"2024-12-16T20:40:24.529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd became candidate at term 3"}
	{"level":"info","ts":"2024-12-16T20:40:24.529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd received MsgVoteResp from d3f1da2044f49cdd at term 3"}
	{"level":"info","ts":"2024-12-16T20:40:24.529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d3f1da2044f49cdd became leader at term 3"}
	{"level":"info","ts":"2024-12-16T20:40:24.529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d3f1da2044f49cdd elected leader d3f1da2044f49cdd at term 3"}
	{"level":"info","ts":"2024-12-16T20:40:24.534Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d3f1da2044f49cdd","local-member-attributes":"{Name:test-preload-817668 ClientURLs:[https://192.168.39.211:2379]}","request-path":"/0/members/d3f1da2044f49cdd/attributes","cluster-id":"a3f4522b5c780b58","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-16T20:40:24.535Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T20:40:24.535Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T20:40:24.538Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.211:2379"}
	{"level":"info","ts":"2024-12-16T20:40:24.538Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-16T20:40:24.539Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-16T20:40:24.539Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:40:42 up 0 min,  0 users,  load average: 1.32, 0.36, 0.12
	Linux test-preload-817668 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [fc49081733f0e374e8796b86b0d6a30d31ce979e6c363010a1da8ab06d6ce63f] <==
	I1216 20:40:26.964417       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1216 20:40:26.964712       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1216 20:40:26.971726       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1216 20:40:26.971761       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I1216 20:40:27.014192       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1216 20:40:27.032658       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E1216 20:40:27.136855       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1216 20:40:27.146860       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1216 20:40:27.148111       1 cache.go:39] Caches are synced for autoregister controller
	I1216 20:40:27.154160       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1216 20:40:27.154656       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 20:40:27.157527       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1216 20:40:27.172236       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1216 20:40:27.174861       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1216 20:40:27.195099       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 20:40:27.615957       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1216 20:40:27.951431       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 20:40:28.575642       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1216 20:40:28.589698       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1216 20:40:28.635534       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1216 20:40:28.654455       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 20:40:28.661378       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 20:40:29.159384       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1216 20:40:39.772706       1 controller.go:611] quota admission added evaluator for: endpoints
	I1216 20:40:39.815395       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [fa5ea551d0f39b38ad5459fdce183358f04f4e78aaf55aa0dc435a5ae8a3fa61] <==
	I1216 20:40:39.693128       1 shared_informer.go:262] Caches are synced for PVC protection
	I1216 20:40:39.695354       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1216 20:40:39.696480       1 shared_informer.go:262] Caches are synced for TTL after finished
	I1216 20:40:39.696556       1 shared_informer.go:262] Caches are synced for HPA
	I1216 20:40:39.697620       1 shared_informer.go:262] Caches are synced for job
	I1216 20:40:39.713010       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1216 20:40:39.726554       1 shared_informer.go:262] Caches are synced for namespace
	I1216 20:40:39.730081       1 shared_informer.go:262] Caches are synced for daemon sets
	I1216 20:40:39.737013       1 shared_informer.go:262] Caches are synced for taint
	I1216 20:40:39.737198       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1216 20:40:39.737764       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1216 20:40:39.737959       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-817668. Assuming now as a timestamp.
	I1216 20:40:39.738107       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1216 20:40:39.738460       1 event.go:294] "Event occurred" object="test-preload-817668" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-817668 event: Registered Node test-preload-817668 in Controller"
	I1216 20:40:39.758720       1 shared_informer.go:262] Caches are synced for endpoint
	I1216 20:40:39.802715       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1216 20:40:39.808813       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1216 20:40:39.867444       1 shared_informer.go:262] Caches are synced for disruption
	I1216 20:40:39.867477       1 disruption.go:371] Sending events to api server.
	I1216 20:40:39.882049       1 shared_informer.go:262] Caches are synced for deployment
	I1216 20:40:39.919137       1 shared_informer.go:262] Caches are synced for resource quota
	I1216 20:40:39.943219       1 shared_informer.go:262] Caches are synced for resource quota
	I1216 20:40:40.344255       1 shared_informer.go:262] Caches are synced for garbage collector
	I1216 20:40:40.391805       1 shared_informer.go:262] Caches are synced for garbage collector
	I1216 20:40:40.391862       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [be28863fbf74dca62cebf717970aeeec89066b0aec510b685aa3bf3f5698fe60] <==
	I1216 20:40:29.113765       1 node.go:163] Successfully retrieved node IP: 192.168.39.211
	I1216 20:40:29.113853       1 server_others.go:138] "Detected node IP" address="192.168.39.211"
	I1216 20:40:29.113922       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1216 20:40:29.148338       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1216 20:40:29.148426       1 server_others.go:206] "Using iptables Proxier"
	I1216 20:40:29.149069       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1216 20:40:29.149864       1 server.go:661] "Version info" version="v1.24.4"
	I1216 20:40:29.150042       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 20:40:29.152076       1 config.go:317] "Starting service config controller"
	I1216 20:40:29.152508       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1216 20:40:29.152731       1 config.go:444] "Starting node config controller"
	I1216 20:40:29.152762       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1216 20:40:29.154180       1 config.go:226] "Starting endpoint slice config controller"
	I1216 20:40:29.154211       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1216 20:40:29.252733       1 shared_informer.go:262] Caches are synced for service config
	I1216 20:40:29.253158       1 shared_informer.go:262] Caches are synced for node config
	I1216 20:40:29.254356       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c0cac632629175f2f3a70b32e44342b8579cd82650d1e5453c957fdd0f84702c] <==
	I1216 20:40:23.823674       1 serving.go:348] Generated self-signed cert in-memory
	W1216 20:40:27.051230       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 20:40:27.051540       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 20:40:27.051637       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 20:40:27.051660       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 20:40:27.119214       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1216 20:40:27.119328       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 20:40:27.122191       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1216 20:40:27.127659       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 20:40:27.128554       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1216 20:40:27.134325       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1216 20:40:27.243510       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: I1216 20:40:27.158073    1126 setters.go:532] "Node became not ready" node="test-preload-817668" condition={Type:Ready Status:False LastHeartbeatTime:2024-12-16 20:40:27.158018103 +0000 UTC m=+5.485342492 LastTransitionTime:2024-12-16 20:40:27.158018103 +0000 UTC m=+5.485342492 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: E1216 20:40:27.560112    1126 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-test-preload-817668\" already exists" pod="kube-system/etcd-test-preload-817668"
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: I1216 20:40:27.793853    1126 apiserver.go:52] "Watching apiserver"
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: I1216 20:40:27.797538    1126 topology_manager.go:200] "Topology Admit Handler"
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: I1216 20:40:27.797650    1126 topology_manager.go:200] "Topology Admit Handler"
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: I1216 20:40:27.797684    1126 topology_manager.go:200] "Topology Admit Handler"
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: E1216 20:40:27.798020    1126 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-9ngrd" podUID=71ec151f-cacf-44bb-8665-448403f69b44
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: I1216 20:40:27.879662    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t6dj\" (UniqueName: \"kubernetes.io/projected/71ec151f-cacf-44bb-8665-448403f69b44-kube-api-access-2t6dj\") pod \"coredns-6d4b75cb6d-9ngrd\" (UID: \"71ec151f-cacf-44bb-8665-448403f69b44\") " pod="kube-system/coredns-6d4b75cb6d-9ngrd"
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: I1216 20:40:27.879744    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/56a47da7-ace5-4721-915a-1139bd993681-kube-proxy\") pod \"kube-proxy-mc7d2\" (UID: \"56a47da7-ace5-4721-915a-1139bd993681\") " pod="kube-system/kube-proxy-mc7d2"
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: I1216 20:40:27.879768    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zfrg\" (UniqueName: \"kubernetes.io/projected/df2461a8-d94a-41ce-af4e-88104696317f-kube-api-access-7zfrg\") pod \"storage-provisioner\" (UID: \"df2461a8-d94a-41ce-af4e-88104696317f\") " pod="kube-system/storage-provisioner"
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: I1216 20:40:27.879797    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjcvq\" (UniqueName: \"kubernetes.io/projected/56a47da7-ace5-4721-915a-1139bd993681-kube-api-access-xjcvq\") pod \"kube-proxy-mc7d2\" (UID: \"56a47da7-ace5-4721-915a-1139bd993681\") " pod="kube-system/kube-proxy-mc7d2"
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: I1216 20:40:27.879820    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ec151f-cacf-44bb-8665-448403f69b44-config-volume\") pod \"coredns-6d4b75cb6d-9ngrd\" (UID: \"71ec151f-cacf-44bb-8665-448403f69b44\") " pod="kube-system/coredns-6d4b75cb6d-9ngrd"
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: I1216 20:40:27.879845    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56a47da7-ace5-4721-915a-1139bd993681-lib-modules\") pod \"kube-proxy-mc7d2\" (UID: \"56a47da7-ace5-4721-915a-1139bd993681\") " pod="kube-system/kube-proxy-mc7d2"
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: I1216 20:40:27.879867    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56a47da7-ace5-4721-915a-1139bd993681-xtables-lock\") pod \"kube-proxy-mc7d2\" (UID: \"56a47da7-ace5-4721-915a-1139bd993681\") " pod="kube-system/kube-proxy-mc7d2"
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: I1216 20:40:27.879884    1126 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/df2461a8-d94a-41ce-af4e-88104696317f-tmp\") pod \"storage-provisioner\" (UID: \"df2461a8-d94a-41ce-af4e-88104696317f\") " pod="kube-system/storage-provisioner"
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: I1216 20:40:27.879906    1126 reconciler.go:159] "Reconciler: start to sync state"
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: E1216 20:40:27.981098    1126 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 16 20:40:27 test-preload-817668 kubelet[1126]: E1216 20:40:27.981225    1126 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/71ec151f-cacf-44bb-8665-448403f69b44-config-volume podName:71ec151f-cacf-44bb-8665-448403f69b44 nodeName:}" failed. No retries permitted until 2024-12-16 20:40:28.481179001 +0000 UTC m=+6.808503389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/71ec151f-cacf-44bb-8665-448403f69b44-config-volume") pod "coredns-6d4b75cb6d-9ngrd" (UID: "71ec151f-cacf-44bb-8665-448403f69b44") : object "kube-system"/"coredns" not registered
	Dec 16 20:40:28 test-preload-817668 kubelet[1126]: E1216 20:40:28.485644    1126 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 16 20:40:28 test-preload-817668 kubelet[1126]: E1216 20:40:28.485739    1126 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/71ec151f-cacf-44bb-8665-448403f69b44-config-volume podName:71ec151f-cacf-44bb-8665-448403f69b44 nodeName:}" failed. No retries permitted until 2024-12-16 20:40:29.485723406 +0000 UTC m=+7.813047794 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/71ec151f-cacf-44bb-8665-448403f69b44-config-volume") pod "coredns-6d4b75cb6d-9ngrd" (UID: "71ec151f-cacf-44bb-8665-448403f69b44") : object "kube-system"/"coredns" not registered
	Dec 16 20:40:29 test-preload-817668 kubelet[1126]: E1216 20:40:29.494200    1126 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 16 20:40:29 test-preload-817668 kubelet[1126]: E1216 20:40:29.494309    1126 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/71ec151f-cacf-44bb-8665-448403f69b44-config-volume podName:71ec151f-cacf-44bb-8665-448403f69b44 nodeName:}" failed. No retries permitted until 2024-12-16 20:40:31.494292952 +0000 UTC m=+9.821617342 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/71ec151f-cacf-44bb-8665-448403f69b44-config-volume") pod "coredns-6d4b75cb6d-9ngrd" (UID: "71ec151f-cacf-44bb-8665-448403f69b44") : object "kube-system"/"coredns" not registered
	Dec 16 20:40:29 test-preload-817668 kubelet[1126]: E1216 20:40:29.925385    1126 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-9ngrd" podUID=71ec151f-cacf-44bb-8665-448403f69b44
	Dec 16 20:40:31 test-preload-817668 kubelet[1126]: E1216 20:40:31.516678    1126 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 16 20:40:31 test-preload-817668 kubelet[1126]: E1216 20:40:31.516889    1126 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/71ec151f-cacf-44bb-8665-448403f69b44-config-volume podName:71ec151f-cacf-44bb-8665-448403f69b44 nodeName:}" failed. No retries permitted until 2024-12-16 20:40:35.516814445 +0000 UTC m=+13.844138838 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/71ec151f-cacf-44bb-8665-448403f69b44-config-volume") pod "coredns-6d4b75cb6d-9ngrd" (UID: "71ec151f-cacf-44bb-8665-448403f69b44") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [33981d6726e9a1b30ff94bd164f0acc16eef680b65857cadc36da51f6630095a] <==
	I1216 20:40:28.940496       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-817668 -n test-preload-817668
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-817668 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-817668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-817668
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-817668: (1.159150988s)
--- FAIL: TestPreload (285.31s)

                                                
                                    
x
+
TestKubernetesUpgrade (455.6s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-560677 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-560677 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m57.375136817s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-560677] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20091
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-560677" primary control-plane node in "kubernetes-upgrade-560677" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 20:42:38.594520   49163 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:42:38.594869   49163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:42:38.594885   49163 out.go:358] Setting ErrFile to fd 2...
	I1216 20:42:38.594892   49163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:42:38.595163   49163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:42:38.595913   49163 out.go:352] Setting JSON to false
	I1216 20:42:38.597086   49163 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5104,"bootTime":1734376655,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 20:42:38.597168   49163 start.go:139] virtualization: kvm guest
	I1216 20:42:38.598781   49163 out.go:177] * [kubernetes-upgrade-560677] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 20:42:38.600463   49163 notify.go:220] Checking for updates...
	I1216 20:42:38.601568   49163 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 20:42:38.604222   49163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 20:42:38.606601   49163 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:42:38.608052   49163 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:42:38.609920   49163 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 20:42:38.611500   49163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 20:42:38.613170   49163 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 20:42:38.651286   49163 out.go:177] * Using the kvm2 driver based on user configuration
	I1216 20:42:38.652920   49163 start.go:297] selected driver: kvm2
	I1216 20:42:38.652950   49163 start.go:901] validating driver "kvm2" against <nil>
	I1216 20:42:38.652963   49163 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 20:42:38.653977   49163 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:42:38.670079   49163 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 20:42:38.691394   49163 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1216 20:42:38.691454   49163 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 20:42:38.691776   49163 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 20:42:38.691804   49163 cni.go:84] Creating CNI manager for ""
	I1216 20:42:38.691862   49163 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:42:38.691871   49163 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 20:42:38.691946   49163 start.go:340] cluster config:
	{Name:kubernetes-upgrade-560677 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-560677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:42:38.692104   49163 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:42:38.693995   49163 out.go:177] * Starting "kubernetes-upgrade-560677" primary control-plane node in "kubernetes-upgrade-560677" cluster
	I1216 20:42:38.695299   49163 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:42:38.695340   49163 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:42:38.695351   49163 cache.go:56] Caching tarball of preloaded images
	I1216 20:42:38.695446   49163 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:42:38.695459   49163 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1216 20:42:38.695903   49163 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/config.json ...
	I1216 20:42:38.695936   49163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/config.json: {Name:mkf3b5e860cc3898c84b9aa24e0018b66bcb8847 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:42:38.696111   49163 start.go:360] acquireMachinesLock for kubernetes-upgrade-560677: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:43:03.004971   49163 start.go:364] duration metric: took 24.308817788s to acquireMachinesLock for "kubernetes-upgrade-560677"
	I1216 20:43:03.005041   49163 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-560677 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-560677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 20:43:03.005143   49163 start.go:125] createHost starting for "" (driver="kvm2")
	I1216 20:43:03.007947   49163 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 20:43:03.008144   49163 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:43:03.008201   49163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:43:03.025407   49163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45871
	I1216 20:43:03.025897   49163 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:43:03.026613   49163 main.go:141] libmachine: Using API Version  1
	I1216 20:43:03.026633   49163 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:43:03.027065   49163 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:43:03.027270   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetMachineName
	I1216 20:43:03.027429   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .DriverName
	I1216 20:43:03.027601   49163 start.go:159] libmachine.API.Create for "kubernetes-upgrade-560677" (driver="kvm2")
	I1216 20:43:03.027634   49163 client.go:168] LocalClient.Create starting
	I1216 20:43:03.027668   49163 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem
	I1216 20:43:03.027707   49163 main.go:141] libmachine: Decoding PEM data...
	I1216 20:43:03.027754   49163 main.go:141] libmachine: Parsing certificate...
	I1216 20:43:03.027822   49163 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem
	I1216 20:43:03.027857   49163 main.go:141] libmachine: Decoding PEM data...
	I1216 20:43:03.027874   49163 main.go:141] libmachine: Parsing certificate...
	I1216 20:43:03.027904   49163 main.go:141] libmachine: Running pre-create checks...
	I1216 20:43:03.027918   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .PreCreateCheck
	I1216 20:43:03.028310   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetConfigRaw
	I1216 20:43:03.028715   49163 main.go:141] libmachine: Creating machine...
	I1216 20:43:03.028731   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .Create
	I1216 20:43:03.028883   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) creating KVM machine...
	I1216 20:43:03.028906   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) creating network...
	I1216 20:43:03.030176   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found existing default KVM network
	I1216 20:43:03.031234   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:03.031070   49494 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:79:29:19} reservation:<nil>}
	I1216 20:43:03.032052   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:03.031984   49494 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002e0030}
	I1216 20:43:03.032112   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | created network xml: 
	I1216 20:43:03.032147   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | <network>
	I1216 20:43:03.032159   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG |   <name>mk-kubernetes-upgrade-560677</name>
	I1216 20:43:03.032167   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG |   <dns enable='no'/>
	I1216 20:43:03.032175   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG |   
	I1216 20:43:03.032185   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1216 20:43:03.032197   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG |     <dhcp>
	I1216 20:43:03.032217   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1216 20:43:03.032236   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG |     </dhcp>
	I1216 20:43:03.032246   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG |   </ip>
	I1216 20:43:03.032253   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG |   
	I1216 20:43:03.032262   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | </network>
	I1216 20:43:03.032272   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | 
	I1216 20:43:03.037666   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | trying to create private KVM network mk-kubernetes-upgrade-560677 192.168.50.0/24...
	I1216 20:43:03.110644   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | private KVM network mk-kubernetes-upgrade-560677 192.168.50.0/24 created
	I1216 20:43:03.110680   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) setting up store path in /home/jenkins/minikube-integration/20091-7083/.minikube/machines/kubernetes-upgrade-560677 ...
	I1216 20:43:03.110696   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:03.110581   49494 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:43:03.110708   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) building disk image from file:///home/jenkins/minikube-integration/20091-7083/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso
	I1216 20:43:03.110748   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Downloading /home/jenkins/minikube-integration/20091-7083/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20091-7083/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1216 20:43:03.387580   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:03.387450   49494 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/kubernetes-upgrade-560677/id_rsa...
	I1216 20:43:03.435034   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:03.434883   49494 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/kubernetes-upgrade-560677/kubernetes-upgrade-560677.rawdisk...
	I1216 20:43:03.435110   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | Writing magic tar header
	I1216 20:43:03.435126   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube/machines/kubernetes-upgrade-560677 (perms=drwx------)
	I1216 20:43:03.435143   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube/machines (perms=drwxr-xr-x)
	I1216 20:43:03.435153   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube (perms=drwxr-xr-x)
	I1216 20:43:03.435162   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | Writing SSH key tar header
	I1216 20:43:03.435188   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:03.435004   49494 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20091-7083/.minikube/machines/kubernetes-upgrade-560677 ...
	I1216 20:43:03.435203   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/kubernetes-upgrade-560677
	I1216 20:43:03.435215   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) setting executable bit set on /home/jenkins/minikube-integration/20091-7083 (perms=drwxrwxr-x)
	I1216 20:43:03.435261   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1216 20:43:03.435280   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1216 20:43:03.435295   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube/machines
	I1216 20:43:03.435302   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) creating domain...
	I1216 20:43:03.435319   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:43:03.435333   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20091-7083
	I1216 20:43:03.435347   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1216 20:43:03.435357   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | checking permissions on dir: /home/jenkins
	I1216 20:43:03.435371   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | checking permissions on dir: /home
	I1216 20:43:03.435383   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | skipping /home - not owner
	I1216 20:43:03.436479   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) define libvirt domain using xml: 
	I1216 20:43:03.436518   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) <domain type='kvm'>
	I1216 20:43:03.436529   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)   <name>kubernetes-upgrade-560677</name>
	I1216 20:43:03.436548   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)   <memory unit='MiB'>2200</memory>
	I1216 20:43:03.436560   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)   <vcpu>2</vcpu>
	I1216 20:43:03.436578   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)   <features>
	I1216 20:43:03.436589   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     <acpi/>
	I1216 20:43:03.436600   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     <apic/>
	I1216 20:43:03.436608   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     <pae/>
	I1216 20:43:03.436621   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     
	I1216 20:43:03.436656   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)   </features>
	I1216 20:43:03.436682   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)   <cpu mode='host-passthrough'>
	I1216 20:43:03.436712   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)   
	I1216 20:43:03.436724   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)   </cpu>
	I1216 20:43:03.436737   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)   <os>
	I1216 20:43:03.436749   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     <type>hvm</type>
	I1216 20:43:03.436781   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     <boot dev='cdrom'/>
	I1216 20:43:03.436802   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     <boot dev='hd'/>
	I1216 20:43:03.436816   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     <bootmenu enable='no'/>
	I1216 20:43:03.436827   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)   </os>
	I1216 20:43:03.436838   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)   <devices>
	I1216 20:43:03.436847   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     <disk type='file' device='cdrom'>
	I1216 20:43:03.436871   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)       <source file='/home/jenkins/minikube-integration/20091-7083/.minikube/machines/kubernetes-upgrade-560677/boot2docker.iso'/>
	I1216 20:43:03.436891   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)       <target dev='hdc' bus='scsi'/>
	I1216 20:43:03.436911   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)       <readonly/>
	I1216 20:43:03.436922   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     </disk>
	I1216 20:43:03.436935   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     <disk type='file' device='disk'>
	I1216 20:43:03.436952   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1216 20:43:03.436976   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)       <source file='/home/jenkins/minikube-integration/20091-7083/.minikube/machines/kubernetes-upgrade-560677/kubernetes-upgrade-560677.rawdisk'/>
	I1216 20:43:03.436988   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)       <target dev='hda' bus='virtio'/>
	I1216 20:43:03.437029   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     </disk>
	I1216 20:43:03.437046   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     <interface type='network'>
	I1216 20:43:03.437059   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)       <source network='mk-kubernetes-upgrade-560677'/>
	I1216 20:43:03.437074   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)       <model type='virtio'/>
	I1216 20:43:03.437086   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     </interface>
	I1216 20:43:03.437098   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     <interface type='network'>
	I1216 20:43:03.437109   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)       <source network='default'/>
	I1216 20:43:03.437124   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)       <model type='virtio'/>
	I1216 20:43:03.437137   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     </interface>
	I1216 20:43:03.437147   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     <serial type='pty'>
	I1216 20:43:03.437161   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)       <target port='0'/>
	I1216 20:43:03.437171   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     </serial>
	I1216 20:43:03.437181   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     <console type='pty'>
	I1216 20:43:03.437197   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)       <target type='serial' port='0'/>
	I1216 20:43:03.437210   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     </console>
	I1216 20:43:03.437221   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     <rng model='virtio'>
	I1216 20:43:03.437235   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)       <backend model='random'>/dev/random</backend>
	I1216 20:43:03.437243   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     </rng>
	I1216 20:43:03.437262   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     
	I1216 20:43:03.437275   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)     
	I1216 20:43:03.437287   49163 main.go:141] libmachine: (kubernetes-upgrade-560677)   </devices>
	I1216 20:43:03.437295   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) </domain>
	I1216 20:43:03.437307   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) 
	I1216 20:43:03.444068   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:3c:20:b8 in network default
	I1216 20:43:03.444679   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) starting domain...
	I1216 20:43:03.444701   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) ensuring networks are active...
	I1216 20:43:03.444713   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:03.445408   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Ensuring network default is active
	I1216 20:43:03.445759   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Ensuring network mk-kubernetes-upgrade-560677 is active
	I1216 20:43:03.446238   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) getting domain XML...
	I1216 20:43:03.446905   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) creating domain...
	I1216 20:43:04.784807   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) waiting for IP...
	I1216 20:43:04.785793   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:04.786210   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | unable to find current IP address of domain kubernetes-upgrade-560677 in network mk-kubernetes-upgrade-560677
	I1216 20:43:04.786315   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:04.786226   49494 retry.go:31] will retry after 267.187864ms: waiting for domain to come up
	I1216 20:43:05.054918   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:05.055442   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | unable to find current IP address of domain kubernetes-upgrade-560677 in network mk-kubernetes-upgrade-560677
	I1216 20:43:05.055503   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:05.055437   49494 retry.go:31] will retry after 389.695129ms: waiting for domain to come up
	I1216 20:43:05.447392   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:05.447969   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | unable to find current IP address of domain kubernetes-upgrade-560677 in network mk-kubernetes-upgrade-560677
	I1216 20:43:05.447995   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:05.447952   49494 retry.go:31] will retry after 356.536242ms: waiting for domain to come up
	I1216 20:43:05.806789   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:05.807400   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | unable to find current IP address of domain kubernetes-upgrade-560677 in network mk-kubernetes-upgrade-560677
	I1216 20:43:05.807424   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:05.807296   49494 retry.go:31] will retry after 469.155017ms: waiting for domain to come up
	I1216 20:43:06.278182   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:06.278685   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | unable to find current IP address of domain kubernetes-upgrade-560677 in network mk-kubernetes-upgrade-560677
	I1216 20:43:06.278717   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:06.278648   49494 retry.go:31] will retry after 751.020336ms: waiting for domain to come up
	I1216 20:43:07.031691   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:07.032194   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | unable to find current IP address of domain kubernetes-upgrade-560677 in network mk-kubernetes-upgrade-560677
	I1216 20:43:07.032346   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:07.032180   49494 retry.go:31] will retry after 814.253459ms: waiting for domain to come up
	I1216 20:43:07.848354   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:07.849066   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | unable to find current IP address of domain kubernetes-upgrade-560677 in network mk-kubernetes-upgrade-560677
	I1216 20:43:07.849101   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:07.848982   49494 retry.go:31] will retry after 812.695558ms: waiting for domain to come up
	I1216 20:43:08.663151   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:08.663611   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | unable to find current IP address of domain kubernetes-upgrade-560677 in network mk-kubernetes-upgrade-560677
	I1216 20:43:08.663645   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:08.663588   49494 retry.go:31] will retry after 1.246884969s: waiting for domain to come up
	I1216 20:43:09.911938   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:09.912429   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | unable to find current IP address of domain kubernetes-upgrade-560677 in network mk-kubernetes-upgrade-560677
	I1216 20:43:09.912460   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:09.912393   49494 retry.go:31] will retry after 1.199418597s: waiting for domain to come up
	I1216 20:43:11.112802   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:11.113241   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | unable to find current IP address of domain kubernetes-upgrade-560677 in network mk-kubernetes-upgrade-560677
	I1216 20:43:11.113292   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:11.113228   49494 retry.go:31] will retry after 2.068962961s: waiting for domain to come up
	I1216 20:43:13.184197   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:13.184607   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | unable to find current IP address of domain kubernetes-upgrade-560677 in network mk-kubernetes-upgrade-560677
	I1216 20:43:13.184666   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:13.184586   49494 retry.go:31] will retry after 2.798841399s: waiting for domain to come up
	I1216 20:43:15.985575   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:15.985997   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | unable to find current IP address of domain kubernetes-upgrade-560677 in network mk-kubernetes-upgrade-560677
	I1216 20:43:15.986031   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:15.985951   49494 retry.go:31] will retry after 2.416437792s: waiting for domain to come up
	I1216 20:43:18.403837   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:18.404405   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | unable to find current IP address of domain kubernetes-upgrade-560677 in network mk-kubernetes-upgrade-560677
	I1216 20:43:18.404434   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:18.404356   49494 retry.go:31] will retry after 4.481979931s: waiting for domain to come up
	I1216 20:43:22.891157   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:22.891616   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | unable to find current IP address of domain kubernetes-upgrade-560677 in network mk-kubernetes-upgrade-560677
	I1216 20:43:22.891639   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | I1216 20:43:22.891597   49494 retry.go:31] will retry after 4.159835123s: waiting for domain to come up
	I1216 20:43:27.056318   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.056826   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) found domain IP: 192.168.50.61
	I1216 20:43:27.056851   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has current primary IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.056858   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) reserving static IP address...
	I1216 20:43:27.057408   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-560677", mac: "52:54:00:0a:f3:06", ip: "192.168.50.61"} in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.140080   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) reserved static IP address 192.168.50.61 for domain kubernetes-upgrade-560677
	I1216 20:43:27.140112   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | Getting to WaitForSSH function...
	I1216 20:43:27.140124   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) waiting for SSH...
	I1216 20:43:27.142805   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.143159   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:43:18 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0a:f3:06}
	I1216 20:43:27.143192   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.143312   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | Using SSH client type: external
	I1216 20:43:27.143340   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/kubernetes-upgrade-560677/id_rsa (-rw-------)
	I1216 20:43:27.143387   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.61 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/kubernetes-upgrade-560677/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:43:27.143430   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | About to run SSH command:
	I1216 20:43:27.143454   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | exit 0
	I1216 20:43:27.271570   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | SSH cmd err, output: <nil>: 
	I1216 20:43:27.271870   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) KVM machine creation complete
	I1216 20:43:27.272182   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetConfigRaw
	I1216 20:43:27.272872   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .DriverName
	I1216 20:43:27.273124   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .DriverName
	I1216 20:43:27.273326   49163 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1216 20:43:27.273344   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetState
	I1216 20:43:27.274689   49163 main.go:141] libmachine: Detecting operating system of created instance...
	I1216 20:43:27.274702   49163 main.go:141] libmachine: Waiting for SSH to be available...
	I1216 20:43:27.274718   49163 main.go:141] libmachine: Getting to WaitForSSH function...
	I1216 20:43:27.274724   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHHostname
	I1216 20:43:27.277035   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.277471   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:43:18 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:43:27.277502   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.277634   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHPort
	I1216 20:43:27.277824   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:43:27.277992   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:43:27.278170   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHUsername
	I1216 20:43:27.278374   49163 main.go:141] libmachine: Using SSH client type: native
	I1216 20:43:27.278639   49163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I1216 20:43:27.278657   49163 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1216 20:43:27.386939   49163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:43:27.386967   49163 main.go:141] libmachine: Detecting the provisioner...
	I1216 20:43:27.386978   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHHostname
	I1216 20:43:27.390106   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.390509   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:43:18 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:43:27.390541   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.390716   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHPort
	I1216 20:43:27.390934   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:43:27.391118   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:43:27.391310   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHUsername
	I1216 20:43:27.391474   49163 main.go:141] libmachine: Using SSH client type: native
	I1216 20:43:27.391640   49163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I1216 20:43:27.391649   49163 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1216 20:43:27.500311   49163 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1216 20:43:27.500419   49163 main.go:141] libmachine: found compatible host: buildroot
	I1216 20:43:27.500432   49163 main.go:141] libmachine: Provisioning with buildroot...
	I1216 20:43:27.500450   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetMachineName
	I1216 20:43:27.500697   49163 buildroot.go:166] provisioning hostname "kubernetes-upgrade-560677"
	I1216 20:43:27.500725   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetMachineName
	I1216 20:43:27.500902   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHHostname
	I1216 20:43:27.503665   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.503970   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:43:18 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:43:27.504010   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.504115   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHPort
	I1216 20:43:27.504345   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:43:27.504503   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:43:27.504642   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHUsername
	I1216 20:43:27.504783   49163 main.go:141] libmachine: Using SSH client type: native
	I1216 20:43:27.504959   49163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I1216 20:43:27.504973   49163 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-560677 && echo "kubernetes-upgrade-560677" | sudo tee /etc/hostname
	I1216 20:43:27.633697   49163 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-560677
	
	I1216 20:43:27.633730   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHHostname
	I1216 20:43:27.636833   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.637266   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:43:18 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:43:27.637297   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.637467   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHPort
	I1216 20:43:27.637666   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:43:27.637824   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:43:27.637943   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHUsername
	I1216 20:43:27.638068   49163 main.go:141] libmachine: Using SSH client type: native
	I1216 20:43:27.638252   49163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I1216 20:43:27.638273   49163 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-560677' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-560677/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-560677' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:43:27.757215   49163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:43:27.757259   49163 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:43:27.757318   49163 buildroot.go:174] setting up certificates
	I1216 20:43:27.757329   49163 provision.go:84] configureAuth start
	I1216 20:43:27.757343   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetMachineName
	I1216 20:43:27.757680   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetIP
	I1216 20:43:27.760522   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.760895   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:43:18 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:43:27.760923   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.761163   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHHostname
	I1216 20:43:27.763353   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.763646   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:43:18 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:43:27.763685   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.763794   49163 provision.go:143] copyHostCerts
	I1216 20:43:27.763855   49163 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:43:27.763867   49163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:43:27.763936   49163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:43:27.764049   49163 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:43:27.764061   49163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:43:27.764093   49163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:43:27.764188   49163 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:43:27.764197   49163 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:43:27.764229   49163 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:43:27.764304   49163 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-560677 san=[127.0.0.1 192.168.50.61 kubernetes-upgrade-560677 localhost minikube]
	I1216 20:43:27.967039   49163 provision.go:177] copyRemoteCerts
	I1216 20:43:27.967099   49163 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:43:27.967121   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHHostname
	I1216 20:43:27.969903   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.970247   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:43:18 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:43:27.970277   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:27.970500   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHPort
	I1216 20:43:27.970691   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:43:27.970847   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHUsername
	I1216 20:43:27.970999   49163 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/kubernetes-upgrade-560677/id_rsa Username:docker}
	I1216 20:43:28.054098   49163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:43:28.082294   49163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1216 20:43:28.108364   49163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 20:43:28.133853   49163 provision.go:87] duration metric: took 376.508093ms to configureAuth
	I1216 20:43:28.133925   49163 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:43:28.134139   49163 config.go:182] Loaded profile config "kubernetes-upgrade-560677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:43:28.134241   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHHostname
	I1216 20:43:28.138225   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:28.138646   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:43:18 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:43:28.138680   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:28.138882   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHPort
	I1216 20:43:28.139057   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:43:28.139262   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:43:28.139389   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHUsername
	I1216 20:43:28.139542   49163 main.go:141] libmachine: Using SSH client type: native
	I1216 20:43:28.139758   49163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I1216 20:43:28.139777   49163 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:43:28.371994   49163 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:43:28.372026   49163 main.go:141] libmachine: Checking connection to Docker...
	I1216 20:43:28.372036   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetURL
	I1216 20:43:28.373511   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | using libvirt version 6000000
	I1216 20:43:28.375881   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:28.376274   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:43:18 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:43:28.376312   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:28.376512   49163 main.go:141] libmachine: Docker is up and running!
	I1216 20:43:28.376532   49163 main.go:141] libmachine: Reticulating splines...
	I1216 20:43:28.376552   49163 client.go:171] duration metric: took 25.348896223s to LocalClient.Create
	I1216 20:43:28.376583   49163 start.go:167] duration metric: took 25.348981241s to libmachine.API.Create "kubernetes-upgrade-560677"
	I1216 20:43:28.376599   49163 start.go:293] postStartSetup for "kubernetes-upgrade-560677" (driver="kvm2")
	I1216 20:43:28.376613   49163 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:43:28.376635   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .DriverName
	I1216 20:43:28.376854   49163 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:43:28.376878   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHHostname
	I1216 20:43:28.378840   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:28.379161   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:43:18 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:43:28.379193   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:28.379312   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHPort
	I1216 20:43:28.379503   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:43:28.379654   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHUsername
	I1216 20:43:28.379770   49163 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/kubernetes-upgrade-560677/id_rsa Username:docker}
	I1216 20:43:28.470038   49163 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:43:28.474634   49163 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:43:28.474662   49163 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:43:28.474727   49163 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:43:28.474797   49163 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:43:28.474880   49163 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:43:28.485245   49163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:43:28.511674   49163 start.go:296] duration metric: took 135.056027ms for postStartSetup
	I1216 20:43:28.511733   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetConfigRaw
	I1216 20:43:28.512335   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetIP
	I1216 20:43:28.515071   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:28.515493   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:43:18 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:43:28.515523   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:28.515749   49163 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/config.json ...
	I1216 20:43:28.515941   49163 start.go:128] duration metric: took 25.510785781s to createHost
	I1216 20:43:28.515965   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHHostname
	I1216 20:43:28.518047   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:28.518343   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:43:18 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:43:28.518365   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:28.518544   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHPort
	I1216 20:43:28.518729   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:43:28.518877   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:43:28.519047   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHUsername
	I1216 20:43:28.519231   49163 main.go:141] libmachine: Using SSH client type: native
	I1216 20:43:28.519461   49163 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I1216 20:43:28.519515   49163 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:43:28.628178   49163 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734381808.601925905
	
	I1216 20:43:28.628212   49163 fix.go:216] guest clock: 1734381808.601925905
	I1216 20:43:28.628223   49163 fix.go:229] Guest: 2024-12-16 20:43:28.601925905 +0000 UTC Remote: 2024-12-16 20:43:28.515955382 +0000 UTC m=+49.974160639 (delta=85.970523ms)
	I1216 20:43:28.628279   49163 fix.go:200] guest clock delta is within tolerance: 85.970523ms
	I1216 20:43:28.628286   49163 start.go:83] releasing machines lock for "kubernetes-upgrade-560677", held for 25.623286353s
	I1216 20:43:28.628335   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .DriverName
	I1216 20:43:28.628613   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetIP
	I1216 20:43:28.631299   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:28.631677   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:43:18 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:43:28.631711   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:28.631903   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .DriverName
	I1216 20:43:28.632480   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .DriverName
	I1216 20:43:28.632658   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .DriverName
	I1216 20:43:28.632751   49163 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:43:28.632793   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHHostname
	I1216 20:43:28.632861   49163 ssh_runner.go:195] Run: cat /version.json
	I1216 20:43:28.632888   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHHostname
	I1216 20:43:28.635465   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:28.635711   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:28.635837   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:43:18 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:43:28.635861   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:28.636010   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHPort
	I1216 20:43:28.636142   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:43:18 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:43:28.636186   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:28.636147   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:43:28.636290   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHPort
	I1216 20:43:28.636397   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:43:28.636403   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHUsername
	I1216 20:43:28.636510   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHUsername
	I1216 20:43:28.636595   49163 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/kubernetes-upgrade-560677/id_rsa Username:docker}
	I1216 20:43:28.636702   49163 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/kubernetes-upgrade-560677/id_rsa Username:docker}
	I1216 20:43:28.716933   49163 ssh_runner.go:195] Run: systemctl --version
	I1216 20:43:28.751587   49163 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:43:28.919674   49163 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:43:28.926603   49163 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:43:28.926671   49163 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:43:28.944919   49163 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
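	The find invocation above is logged as a raw argv, so its parentheses and globs appear unescaped. A shell-escaped equivalent that can be pasted into the guest, using the same .mk_disabled suffix, would be roughly:

    # Disable any bridge/podman CNI configs so only minikube's chosen CNI remains
    # (shell-escaped form of the command logged above).
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;

    # To undo for a single file, e.g. the podman bridge conflist disabled in this run:
    # sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist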
	I1216 20:43:28.944941   49163 start.go:495] detecting cgroup driver to use...
	I1216 20:43:28.945031   49163 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:43:28.962278   49163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:43:28.977521   49163 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:43:28.977607   49163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:43:28.992475   49163 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:43:29.007053   49163 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:43:29.135108   49163 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:43:29.298114   49163 docker.go:233] disabling docker service ...
	I1216 20:43:29.298170   49163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:43:29.314720   49163 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:43:29.330926   49163 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:43:29.493223   49163 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:43:29.621168   49163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:43:29.637155   49163 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:43:29.657979   49163 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1216 20:43:29.658068   49163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:43:29.669119   49163 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:43:29.669199   49163 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:43:29.681405   49163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:43:29.692537   49163 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
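	Those sed edits, together with the crictl.yaml written a few lines earlier, are all of the runtime configuration minikube changes here. A quick sketch of verifying the result on the guest (expected values taken from the log above):

    # Check the keys the sed edits changed; expected values shown in the comments.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.2"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"

    # crictl was pointed at the CRI-O socket earlier; confirm it answers.
    cat /etc/crictl.yaml      # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl info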
	I1216 20:43:29.704274   49163 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:43:29.716059   49163 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:43:29.726159   49163 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:43:29.726228   49163 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:43:29.740914   49163 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 20:43:29.752436   49163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:43:29.891810   49163 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:43:30.008579   49163 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:43:30.008662   49163 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:43:30.013951   49163 start.go:563] Will wait 60s for crictl version
	I1216 20:43:30.014020   49163 ssh_runner.go:195] Run: which crictl
	I1216 20:43:30.018335   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:43:30.060650   49163 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:43:30.060733   49163 ssh_runner.go:195] Run: crio --version
	I1216 20:43:30.093128   49163 ssh_runner.go:195] Run: crio --version
	I1216 20:43:30.129614   49163 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1216 20:43:30.130801   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetIP
	I1216 20:43:30.134108   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:30.134527   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:43:18 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:43:30.134572   49163 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:43:30.134769   49163 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1216 20:43:30.139371   49163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:43:30.153321   49163 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-560677 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-560677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:43:30.153437   49163 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:43:30.153485   49163 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:43:30.192774   49163 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 20:43:30.192840   49163 ssh_runner.go:195] Run: which lz4
	I1216 20:43:30.197490   49163 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 20:43:30.203129   49163 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 20:43:30.203178   49163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
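	At this point the v1.20.0 preload tarball has been copied to /preloaded.tar.lz4 on the guest. Before it is unpacked into /var below, its contents can be listed with the same lz4 filter that tar uses for extraction, for example:

    # Optional: peek at what the preload tarball contains before it is unpacked.
    # Path as used in the log; lz4 is already present on the guest since extraction uses it.
    sudo tar -I lz4 -tf /preloaded.tar.lz4 | head -n 20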
	I1216 20:43:32.076011   49163 crio.go:462] duration metric: took 1.878560788s to copy over tarball
	I1216 20:43:32.076105   49163 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 20:43:34.899014   49163 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.822875379s)
	I1216 20:43:34.899051   49163 crio.go:469] duration metric: took 2.823007356s to extract the tarball
	I1216 20:43:34.899067   49163 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 20:43:34.942167   49163 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:43:34.992992   49163 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 20:43:34.993017   49163 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 20:43:34.993083   49163 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:43:34.993139   49163 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1216 20:43:34.993168   49163 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1216 20:43:34.993201   49163 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:43:34.993154   49163 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:43:34.993104   49163 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:43:34.993387   49163 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1216 20:43:34.993528   49163 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:43:34.994829   49163 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:43:34.994842   49163 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:43:34.994830   49163 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:43:34.994881   49163 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:43:34.994908   49163 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1216 20:43:34.994914   49163 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:43:34.994920   49163 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1216 20:43:34.994940   49163 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1216 20:43:35.201105   49163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1216 20:43:35.218165   49163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1216 20:43:35.223108   49163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1216 20:43:35.242477   49163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:43:35.251639   49163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:43:35.268584   49163 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1216 20:43:35.268631   49163 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1216 20:43:35.268678   49163 ssh_runner.go:195] Run: which crictl
	I1216 20:43:35.280491   49163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:43:35.284824   49163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:43:35.316493   49163 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1216 20:43:35.316537   49163 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1216 20:43:35.316582   49163 ssh_runner.go:195] Run: which crictl
	I1216 20:43:35.347175   49163 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1216 20:43:35.347211   49163 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1216 20:43:35.347266   49163 ssh_runner.go:195] Run: which crictl
	I1216 20:43:35.381477   49163 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1216 20:43:35.381526   49163 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:43:35.381578   49163 ssh_runner.go:195] Run: which crictl
	I1216 20:43:35.398044   49163 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1216 20:43:35.398099   49163 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:43:35.398102   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 20:43:35.398146   49163 ssh_runner.go:195] Run: which crictl
	I1216 20:43:35.420710   49163 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1216 20:43:35.420753   49163 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:43:35.420792   49163 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1216 20:43:35.420846   49163 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:43:35.420888   49163 ssh_runner.go:195] Run: which crictl
	I1216 20:43:35.420927   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 20:43:35.420951   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:43:35.420847   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 20:43:35.420807   49163 ssh_runner.go:195] Run: which crictl
	I1216 20:43:35.421000   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:43:35.481368   49163 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:43:35.502519   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:43:35.502519   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 20:43:35.534256   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 20:43:35.534307   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 20:43:35.553554   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:43:35.567043   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:43:35.567096   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:43:35.851567   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:43:35.851626   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 20:43:35.851714   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 20:43:35.851740   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 20:43:35.851775   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:43:35.851845   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:43:35.851861   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:43:35.963910   49163 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1216 20:43:35.990140   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:43:36.021607   49163 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1216 20:43:36.021674   49163 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1216 20:43:36.021689   49163 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:43:36.021700   49163 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1216 20:43:36.021773   49163 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1216 20:43:36.056698   49163 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1216 20:43:36.066587   49163 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1216 20:43:36.066655   49163 cache_images.go:92] duration metric: took 1.073619209s to LoadCachedImages
	W1216 20:43:36.066736   49163 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
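	The warning only means the local image cache for v1.20.0 is empty, so kubeadm will pull the images during preflight instead. Pre-pulling them inside the guest with crictl is one way to take the pulls off the critical path (a sketch; image list copied from the LoadCachedImages line above):

    # Pre-pull the v1.20.0 control-plane images so 'kubeadm init' finds them locally.
    for img in \
      registry.k8s.io/kube-apiserver:v1.20.0 \
      registry.k8s.io/kube-controller-manager:v1.20.0 \
      registry.k8s.io/kube-scheduler:v1.20.0 \
      registry.k8s.io/kube-proxy:v1.20.0 \
      registry.k8s.io/pause:3.2 \
      registry.k8s.io/etcd:3.4.13-0 \
      registry.k8s.io/coredns:1.7.0 \
      gcr.io/k8s-minikube/storage-provisioner:v5; do
      sudo crictl pull "$img"
    done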
	I1216 20:43:36.066756   49163 kubeadm.go:934] updating node { 192.168.50.61 8443 v1.20.0 crio true true} ...
	I1216 20:43:36.066887   49163 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-560677 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-560677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 20:43:36.066977   49163 ssh_runner.go:195] Run: crio config
	I1216 20:43:36.119892   49163 cni.go:84] Creating CNI manager for ""
	I1216 20:43:36.119912   49163 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:43:36.119921   49163 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:43:36.119939   49163 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.61 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-560677 NodeName:kubernetes-upgrade-560677 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1216 20:43:36.120070   49163 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-560677"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 20:43:36.120147   49163 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1216 20:43:36.134297   49163 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:43:36.134376   49163 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:43:36.148169   49163 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1216 20:43:36.170097   49163 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:43:36.191366   49163 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
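	With the rendered config now at /var/tmp/minikube/kubeadm.yaml.new on the guest, it can be exercised against the v1.20.0 binaries without modifying the node, for example with kubeadm's dry-run mode (a sketch):

    # Dry-run the init against the generated config; nothing is persisted.
    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run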
	I1216 20:43:36.210895   49163 ssh_runner.go:195] Run: grep 192.168.50.61	control-plane.minikube.internal$ /etc/hosts
	I1216 20:43:36.215324   49163 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.61	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:43:36.229569   49163 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:43:36.383887   49163 ssh_runner.go:195] Run: sudo systemctl start kubelet
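	Once the kubelet has been started with the drop-in written above, the effective unit and flags can be inspected from the guest; a short sketch:

    # Show the unit plus the 10-kubeadm.conf drop-in that carries the flags above.
    sudo systemctl cat kubelet
    # The kubelet may crash-loop here until kubeadm writes its kubeconfigs;
    # that is normal before 'kubeadm init' has run.
    sudo systemctl status kubelet --no-pager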
	I1216 20:43:36.403399   49163 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677 for IP: 192.168.50.61
	I1216 20:43:36.403428   49163 certs.go:194] generating shared ca certs ...
	I1216 20:43:36.403461   49163 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:43:36.403662   49163 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:43:36.403726   49163 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:43:36.403740   49163 certs.go:256] generating profile certs ...
	I1216 20:43:36.403807   49163 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/client.key
	I1216 20:43:36.403835   49163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/client.crt with IP's: []
	I1216 20:43:36.846883   49163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/client.crt ...
	I1216 20:43:36.846918   49163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/client.crt: {Name:mk87adba956007f9a533fd8f07cef52d15fe9f0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:43:36.847110   49163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/client.key ...
	I1216 20:43:36.847131   49163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/client.key: {Name:mk3176f92d6b9d3f0ea4cc737ca8ee0081cf0ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:43:36.847218   49163 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/apiserver.key.9a37601c
	I1216 20:43:36.847236   49163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/apiserver.crt.9a37601c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.61]
	I1216 20:43:37.007977   49163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/apiserver.crt.9a37601c ...
	I1216 20:43:37.008026   49163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/apiserver.crt.9a37601c: {Name:mkfedc90bff0d88fd2ee64b42ce2c5c8f75a71c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:43:37.008220   49163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/apiserver.key.9a37601c ...
	I1216 20:43:37.008245   49163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/apiserver.key.9a37601c: {Name:mk77b1865a4eb4d7d4a322d365520db322b495e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:43:37.008354   49163 certs.go:381] copying /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/apiserver.crt.9a37601c -> /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/apiserver.crt
	I1216 20:43:37.008453   49163 certs.go:385] copying /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/apiserver.key.9a37601c -> /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/apiserver.key
	I1216 20:43:37.008538   49163 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/proxy-client.key
	I1216 20:43:37.008563   49163 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/proxy-client.crt with IP's: []
	I1216 20:43:37.131255   49163 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/proxy-client.crt ...
	I1216 20:43:37.131286   49163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/proxy-client.crt: {Name:mk079971651e0d3793b65da84ce3b53973f6b7dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:43:37.131466   49163 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/proxy-client.key ...
	I1216 20:43:37.131485   49163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/proxy-client.key: {Name:mke5f33cc5e77154259895775c3161270db486e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:43:37.131697   49163 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:43:37.131737   49163 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:43:37.131746   49163 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:43:37.131780   49163 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:43:37.131815   49163 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:43:37.131841   49163 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:43:37.131901   49163 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:43:37.132619   49163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:43:37.162877   49163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:43:37.188900   49163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:43:37.218559   49163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:43:37.256594   49163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1216 20:43:37.297551   49163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 20:43:37.335712   49163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:43:37.362700   49163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 20:43:37.390613   49163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:43:37.418465   49163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:43:37.446212   49163 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:43:37.472922   49163 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
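	All profile certificates are now under /var/lib/minikube/certs on the guest; a quick way to confirm the API server cert carries the SANs requested earlier (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.50.61) is, for example:

    # Inspect the API server cert that was just copied to the guest.
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'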
	I1216 20:43:37.492624   49163 ssh_runner.go:195] Run: openssl version
	I1216 20:43:37.499801   49163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:43:37.512789   49163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:43:37.518355   49163 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:43:37.518430   49163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:43:37.525211   49163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 20:43:37.537991   49163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:43:37.550392   49163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:43:37.555379   49163 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:43:37.555447   49163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:43:37.561521   49163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:43:37.573736   49163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:43:37.586093   49163 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:43:37.591144   49163 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:43:37.591224   49163 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:43:37.599714   49163 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
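	The 3ec20f2e.0, b5213941.0 and 51391683.0 link names used above are each certificate's OpenSSL subject hash plus a .0 suffix, which is how the system trust store indexes CAs; the mapping can be confirmed with:

    # The link name is derived from the certificate's subject hash.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 (per the log above)
    ls -l /etc/ssl/certs/b5213941.0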
	I1216 20:43:37.611930   49163 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:43:37.617699   49163 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 20:43:37.617762   49163 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-560677 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-560677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:43:37.617863   49163 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:43:37.617948   49163 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:43:37.667339   49163 cri.go:89] found id: ""
	I1216 20:43:37.667421   49163 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 20:43:37.679800   49163 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 20:43:37.692387   49163 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:43:37.702522   49163 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:43:37.702543   49163 kubeadm.go:157] found existing configuration files:
	
	I1216 20:43:37.702586   49163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 20:43:37.714009   49163 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:43:37.714066   49163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:43:37.726104   49163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 20:43:37.737670   49163 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:43:37.737738   49163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:43:37.750008   49163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 20:43:37.761655   49163 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:43:37.761716   49163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:43:37.773747   49163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 20:43:37.785105   49163 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:43:37.785181   49163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 20:43:37.795492   49163 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 20:43:37.912647   49163 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 20:43:37.912728   49163 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 20:43:38.116616   49163 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 20:43:38.116768   49163 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 20:43:38.116922   49163 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 20:43:38.318670   49163 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 20:43:38.448378   49163 out.go:235]   - Generating certificates and keys ...
	I1216 20:43:38.448529   49163 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 20:43:38.448622   49163 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 20:43:38.549713   49163 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 20:43:38.741930   49163 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1216 20:43:38.853166   49163 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1216 20:43:38.930360   49163 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1216 20:43:39.155807   49163 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1216 20:43:39.156115   49163 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-560677 localhost] and IPs [192.168.50.61 127.0.0.1 ::1]
	I1216 20:43:39.248523   49163 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1216 20:43:39.248958   49163 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-560677 localhost] and IPs [192.168.50.61 127.0.0.1 ::1]
	I1216 20:43:39.398393   49163 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 20:43:39.517868   49163 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 20:43:39.674951   49163 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1216 20:43:39.675092   49163 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 20:43:39.764614   49163 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 20:43:39.879841   49163 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 20:43:40.077344   49163 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 20:43:40.362091   49163 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 20:43:40.380585   49163 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 20:43:40.380765   49163 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 20:43:40.380843   49163 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 20:43:40.525213   49163 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 20:43:40.527320   49163 out.go:235]   - Booting up control plane ...
	I1216 20:43:40.527460   49163 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 20:43:40.528056   49163 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 20:43:40.546406   49163 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 20:43:40.547429   49163 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 20:43:40.554197   49163 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 20:44:20.548198   49163 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 20:44:20.549219   49163 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:44:20.549471   49163 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:44:25.549645   49163 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:44:25.549895   49163 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:44:35.549040   49163 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:44:35.549316   49163 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:44:55.549065   49163 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:44:55.549349   49163 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:45:35.550882   49163 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:45:35.551128   49163 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:45:35.551150   49163 kubeadm.go:310] 
	I1216 20:45:35.551227   49163 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 20:45:35.551303   49163 kubeadm.go:310] 		timed out waiting for the condition
	I1216 20:45:35.551316   49163 kubeadm.go:310] 
	I1216 20:45:35.551384   49163 kubeadm.go:310] 	This error is likely caused by:
	I1216 20:45:35.551451   49163 kubeadm.go:310] 		- The kubelet is not running
	I1216 20:45:35.551591   49163 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 20:45:35.551609   49163 kubeadm.go:310] 
	I1216 20:45:35.551768   49163 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 20:45:35.551828   49163 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 20:45:35.551888   49163 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 20:45:35.551897   49163 kubeadm.go:310] 
	I1216 20:45:35.552005   49163 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 20:45:35.552085   49163 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 20:45:35.552092   49163 kubeadm.go:310] 
	I1216 20:45:35.552186   49163 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 20:45:35.552284   49163 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 20:45:35.552377   49163 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 20:45:35.552488   49163 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 20:45:35.552501   49163 kubeadm.go:310] 
	I1216 20:45:35.553728   49163 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 20:45:35.553861   49163 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 20:45:35.553943   49163 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1216 20:45:35.554096   49163 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-560677 localhost] and IPs [192.168.50.61 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-560677 localhost] and IPs [192.168.50.61 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 20:45:35.554135   49163 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 20:45:38.916520   49163 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.362350669s)
	I1216 20:45:38.916621   49163 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 20:45:38.933129   49163 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:45:38.944593   49163 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:45:38.944617   49163 kubeadm.go:157] found existing configuration files:
	
	I1216 20:45:38.944663   49163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 20:45:38.954967   49163 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:45:38.955052   49163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:45:38.965837   49163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 20:45:38.976304   49163 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:45:38.976379   49163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:45:38.987706   49163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 20:45:38.998479   49163 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:45:38.998534   49163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:45:39.010177   49163 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 20:45:39.020524   49163 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:45:39.020602   49163 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 20:45:39.031413   49163 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 20:45:39.103008   49163 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 20:45:39.103070   49163 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 20:45:39.247369   49163 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 20:45:39.247475   49163 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 20:45:39.247575   49163 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 20:45:39.446527   49163 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 20:45:39.448681   49163 out.go:235]   - Generating certificates and keys ...
	I1216 20:45:39.448795   49163 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 20:45:39.448875   49163 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 20:45:39.449007   49163 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 20:45:39.449127   49163 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 20:45:39.449209   49163 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 20:45:39.449258   49163 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 20:45:39.449323   49163 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 20:45:39.450143   49163 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 20:45:39.451352   49163 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 20:45:39.452558   49163 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 20:45:39.453021   49163 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 20:45:39.453105   49163 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 20:45:39.558385   49163 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 20:45:39.651828   49163 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 20:45:39.945160   49163 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 20:45:39.989014   49163 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 20:45:40.007443   49163 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 20:45:40.010146   49163 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 20:45:40.010483   49163 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 20:45:40.170749   49163 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 20:45:40.172791   49163 out.go:235]   - Booting up control plane ...
	I1216 20:45:40.172931   49163 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 20:45:40.185620   49163 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 20:45:40.186921   49163 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 20:45:40.188441   49163 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 20:45:40.196722   49163 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 20:46:20.200445   49163 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 20:46:20.200596   49163 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:46:20.200850   49163 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:46:25.201626   49163 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:46:25.201909   49163 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:46:35.202269   49163 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:46:35.202487   49163 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:46:55.201438   49163 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:46:55.201956   49163 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:47:35.201501   49163 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:47:35.202371   49163 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:47:35.202391   49163 kubeadm.go:310] 
	I1216 20:47:35.202471   49163 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 20:47:35.202572   49163 kubeadm.go:310] 		timed out waiting for the condition
	I1216 20:47:35.202597   49163 kubeadm.go:310] 
	I1216 20:47:35.202659   49163 kubeadm.go:310] 	This error is likely caused by:
	I1216 20:47:35.202709   49163 kubeadm.go:310] 		- The kubelet is not running
	I1216 20:47:35.202877   49163 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 20:47:35.202890   49163 kubeadm.go:310] 
	I1216 20:47:35.203047   49163 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 20:47:35.203106   49163 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 20:47:35.203168   49163 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 20:47:35.203179   49163 kubeadm.go:310] 
	I1216 20:47:35.203340   49163 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 20:47:35.203481   49163 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 20:47:35.203492   49163 kubeadm.go:310] 
	I1216 20:47:35.203674   49163 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 20:47:35.203817   49163 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 20:47:35.203917   49163 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 20:47:35.204026   49163 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 20:47:35.204039   49163 kubeadm.go:310] 
	I1216 20:47:35.204716   49163 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 20:47:35.204830   49163 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 20:47:35.204912   49163 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 20:47:35.204985   49163 kubeadm.go:394] duration metric: took 3m57.587228844s to StartCluster
	I1216 20:47:35.205037   49163 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 20:47:35.205105   49163 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 20:47:35.264893   49163 cri.go:89] found id: ""
	I1216 20:47:35.264925   49163 logs.go:282] 0 containers: []
	W1216 20:47:35.264936   49163 logs.go:284] No container was found matching "kube-apiserver"
	I1216 20:47:35.264946   49163 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 20:47:35.265012   49163 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 20:47:35.316853   49163 cri.go:89] found id: ""
	I1216 20:47:35.316898   49163 logs.go:282] 0 containers: []
	W1216 20:47:35.316911   49163 logs.go:284] No container was found matching "etcd"
	I1216 20:47:35.316920   49163 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 20:47:35.316983   49163 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 20:47:35.370327   49163 cri.go:89] found id: ""
	I1216 20:47:35.370364   49163 logs.go:282] 0 containers: []
	W1216 20:47:35.370375   49163 logs.go:284] No container was found matching "coredns"
	I1216 20:47:35.370385   49163 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 20:47:35.370463   49163 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 20:47:35.421243   49163 cri.go:89] found id: ""
	I1216 20:47:35.421281   49163 logs.go:282] 0 containers: []
	W1216 20:47:35.421292   49163 logs.go:284] No container was found matching "kube-scheduler"
	I1216 20:47:35.421300   49163 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 20:47:35.421374   49163 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 20:47:35.457263   49163 cri.go:89] found id: ""
	I1216 20:47:35.457297   49163 logs.go:282] 0 containers: []
	W1216 20:47:35.457309   49163 logs.go:284] No container was found matching "kube-proxy"
	I1216 20:47:35.457318   49163 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 20:47:35.457414   49163 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 20:47:35.498078   49163 cri.go:89] found id: ""
	I1216 20:47:35.498112   49163 logs.go:282] 0 containers: []
	W1216 20:47:35.498125   49163 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 20:47:35.498133   49163 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 20:47:35.498200   49163 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 20:47:35.540574   49163 cri.go:89] found id: ""
	I1216 20:47:35.540605   49163 logs.go:282] 0 containers: []
	W1216 20:47:35.540616   49163 logs.go:284] No container was found matching "kindnet"
	I1216 20:47:35.540629   49163 logs.go:123] Gathering logs for container status ...
	I1216 20:47:35.540645   49163 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 20:47:35.593483   49163 logs.go:123] Gathering logs for kubelet ...
	I1216 20:47:35.593526   49163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 20:47:35.647673   49163 logs.go:123] Gathering logs for dmesg ...
	I1216 20:47:35.647717   49163 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 20:47:35.666482   49163 logs.go:123] Gathering logs for describe nodes ...
	I1216 20:47:35.666515   49163 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 20:47:35.789578   49163 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 20:47:35.789609   49163 logs.go:123] Gathering logs for CRI-O ...
	I1216 20:47:35.789628   49163 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1216 20:47:35.895563   49163 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1216 20:47:35.895689   49163 out.go:270] * 
	W1216 20:47:35.895874   49163 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 20:47:35.895917   49163 out.go:270] * 
	W1216 20:47:35.897246   49163 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 20:47:35.900990   49163 out.go:201] 
	W1216 20:47:35.902586   49163 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 20:47:35.902648   49163 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 20:47:35.902688   49163 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 20:47:35.905308   49163 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-560677 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
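The repeated [kubelet-check] failures in the log above come from kubeadm polling the kubelet's local healthz endpoint on port 10248 during the wait-control-plane phase. A minimal Go sketch of that probe is shown below; it is an editor's illustration only (not part of the minikube test suite) and assumes it is run on the failing node itself, where the endpoint and error text match the log:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // Probe the same endpoint kubeadm's [kubelet-check] polls:
    // http://localhost:10248/healthz
    func main() {
    	client := &http.Client{Timeout: 2 * time.Second}
    	resp, err := client.Get("http://localhost:10248/healthz")
    	if err != nil {
    		// On this node the probe fails the same way as in the log:
    		// dial tcp 127.0.0.1:10248: connect: connection refused
    		fmt.Println("kubelet healthz probe failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("kubelet healthz:", resp.Status)
    }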
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-560677
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-560677: (1.69035813s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-560677 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-560677 status --format={{.Host}}: exit status 7 (69.74952ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-560677 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-560677 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m18.949671292s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-560677 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-560677 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-560677 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (93.867226ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-560677] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20091
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-560677
	    minikube start -p kubernetes-upgrade-560677 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5606772 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.0, by running:
	    
	    minikube start -p kubernetes-upgrade-560677 --kubernetes-version=v1.32.0
	    

                                                
                                                
** /stderr **
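As the stderr block above shows, minikube refuses the in-place downgrade and exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED) rather than touching the existing v1.32.0 cluster. A minimal sketch of how a caller can detect that refusal from the exit code follows; it mirrors the command and profile name from the log but is an illustrative helper, not code from version_upgrade_test.go, and assumes the minikube binary is present at the relative path shown:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same downgrade attempt as in the log above.
    	cmd := exec.Command("out/minikube-linux-amd64", "start",
    		"-p", "kubernetes-upgrade-560677",
    		"--memory=2200", "--kubernetes-version=v1.20.0",
    		"--driver=kvm2", "--container-runtime=crio")
    	err := cmd.Run()

    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
    		fmt.Println("downgrade correctly refused (exit status 106)")
    		return
    	}
    	fmt.Println("unexpected result:", err)
    }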
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-560677 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-560677 --memory=2200 --kubernetes-version=v1.32.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.63741089s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-12-16 20:50:07.463230125 +0000 UTC m=+4525.936877853
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-560677 -n kubernetes-upgrade-560677
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-560677 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-560677 logs -n 25: (4.843537751s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-647112 sudo                 | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo find            | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo crio            | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-647112                      | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC | 16 Dec 24 20:46 UTC |
	| ssh     | -p NoKubernetes-545724 sudo           | NoKubernetes-545724       | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-545724                | NoKubernetes-545724       | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC | 16 Dec 24 20:46 UTC |
	| start   | -p force-systemd-env-893512           | force-systemd-env-893512  | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC | 16 Dec 24 20:47 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-406516          | force-systemd-flag-406516 | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC | 16 Dec 24 20:47 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p pause-022944                       | pause-022944              | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC | 16 Dec 24 20:46 UTC |
	| start   | -p cert-expiration-270954             | cert-expiration-270954    | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC | 16 Dec 24 20:48 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-893512           | force-systemd-env-893512  | jenkins | v1.34.0 | 16 Dec 24 20:47 UTC | 16 Dec 24 20:47 UTC |
	| start   | -p cert-options-254143                | cert-options-254143       | jenkins | v1.34.0 | 16 Dec 24 20:47 UTC | 16 Dec 24 20:48 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-560677          | kubernetes-upgrade-560677 | jenkins | v1.34.0 | 16 Dec 24 20:47 UTC | 16 Dec 24 20:47 UTC |
	| start   | -p kubernetes-upgrade-560677          | kubernetes-upgrade-560677 | jenkins | v1.34.0 | 16 Dec 24 20:47 UTC | 16 Dec 24 20:48 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-406516 ssh cat     | force-systemd-flag-406516 | jenkins | v1.34.0 | 16 Dec 24 20:47 UTC | 16 Dec 24 20:47 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-406516          | force-systemd-flag-406516 | jenkins | v1.34.0 | 16 Dec 24 20:47 UTC | 16 Dec 24 20:47 UTC |
	| start   | -p stopped-upgrade-976873             | minikube                  | jenkins | v1.26.0 | 16 Dec 24 20:47 UTC | 16 Dec 24 20:49 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | cert-options-254143 ssh               | cert-options-254143       | jenkins | v1.34.0 | 16 Dec 24 20:48 UTC | 16 Dec 24 20:48 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-254143 -- sudo        | cert-options-254143       | jenkins | v1.34.0 | 16 Dec 24 20:48 UTC | 16 Dec 24 20:48 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-254143                | cert-options-254143       | jenkins | v1.34.0 | 16 Dec 24 20:48 UTC | 16 Dec 24 20:48 UTC |
	| start   | -p old-k8s-version-847766             | old-k8s-version-847766    | jenkins | v1.34.0 | 16 Dec 24 20:48 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-560677          | kubernetes-upgrade-560677 | jenkins | v1.34.0 | 16 Dec 24 20:48 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-560677          | kubernetes-upgrade-560677 | jenkins | v1.34.0 | 16 Dec 24 20:48 UTC | 16 Dec 24 20:50 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-976873 stop           | minikube                  | jenkins | v1.26.0 | 16 Dec 24 20:49 UTC | 16 Dec 24 20:49 UTC |
	| start   | -p stopped-upgrade-976873             | stopped-upgrade-976873    | jenkins | v1.34.0 | 16 Dec 24 20:49 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 20:49:38
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 20:49:38.346304   57211 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:49:38.346525   57211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:49:38.346533   57211 out.go:358] Setting ErrFile to fd 2...
	I1216 20:49:38.346538   57211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:49:38.346759   57211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:49:38.347345   57211 out.go:352] Setting JSON to false
	I1216 20:49:38.348398   57211 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5523,"bootTime":1734376655,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 20:49:38.348506   57211 start.go:139] virtualization: kvm guest
	I1216 20:49:38.369313   57211 out.go:177] * [stopped-upgrade-976873] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 20:49:38.405007   57211 notify.go:220] Checking for updates...
	I1216 20:49:38.405080   57211 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 20:49:38.488762   57211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 20:49:38.575335   57211 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:49:38.708363   57211 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:49:38.846820   57211 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 20:49:38.888168   57211 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 20:49:38.958242   57211 config.go:182] Loaded profile config "stopped-upgrade-976873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1216 20:49:38.958826   57211 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:49:38.958930   57211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:49:38.975222   57211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36019
	I1216 20:49:38.975740   57211 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:49:38.976252   57211 main.go:141] libmachine: Using API Version  1
	I1216 20:49:38.976278   57211 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:49:38.976686   57211 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:49:38.976854   57211 main.go:141] libmachine: (stopped-upgrade-976873) Calling .DriverName
	I1216 20:49:39.063981   57211 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I1216 20:49:39.163776   57211 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 20:49:39.164314   57211 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:49:39.164372   57211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:49:39.179547   57211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46837
	I1216 20:49:39.180055   57211 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:49:39.180660   57211 main.go:141] libmachine: Using API Version  1
	I1216 20:49:39.180686   57211 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:49:39.181022   57211 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:49:39.181269   57211 main.go:141] libmachine: (stopped-upgrade-976873) Calling .DriverName
	I1216 20:49:39.340303   57211 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 20:49:39.477095   57211 start.go:297] selected driver: kvm2
	I1216 20:49:39.477152   57211 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-976873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-976873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.124 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1216 20:49:39.477289   57211 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 20:49:39.478091   57211 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:49:39.478193   57211 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 20:49:39.494707   57211 install.go:137] /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1216 20:49:39.495224   57211 cni.go:84] Creating CNI manager for ""
	I1216 20:49:39.495324   57211 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:49:39.495412   57211 start.go:340] cluster config:
	{Name:stopped-upgrade-976873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-976873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.124 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1216 20:49:39.495583   57211 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:49:39.630807   57211 out.go:177] * Starting "stopped-upgrade-976873" primary control-plane node in "stopped-upgrade-976873" cluster
	I1216 20:49:41.783109   56772 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:49:41.783147   56772 machine.go:96] duration metric: took 9.814791233s to provisionDockerMachine
	I1216 20:49:41.783162   56772 start.go:293] postStartSetup for "kubernetes-upgrade-560677" (driver="kvm2")
	I1216 20:49:41.783176   56772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:49:41.783220   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .DriverName
	I1216 20:49:41.783566   56772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:49:41.783603   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHHostname
	I1216 20:49:41.786876   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:49:41.787420   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:48:32 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:49:41.787454   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:49:41.787621   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHPort
	I1216 20:49:41.787878   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:49:41.788078   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHUsername
	I1216 20:49:41.788256   56772 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/kubernetes-upgrade-560677/id_rsa Username:docker}
	I1216 20:49:38.373242   56531 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.833883979s)
	I1216 20:49:38.373283   56531 crio.go:469] duration metric: took 2.834048362s to extract the tarball
	I1216 20:49:38.373294   56531 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 20:49:38.421281   56531 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:49:38.592770   56531 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 20:49:38.592804   56531 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 20:49:38.592856   56531 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:49:38.592907   56531 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:49:38.592935   56531 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1216 20:49:38.592972   56531 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1216 20:49:38.593155   56531 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1216 20:49:38.593165   56531 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:49:38.593170   56531 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:49:38.593244   56531 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:49:38.594547   56531 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:49:38.594582   56531 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:49:38.594607   56531 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:49:38.594613   56531 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1216 20:49:38.594633   56531 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:49:38.594644   56531 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1216 20:49:38.594698   56531 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:49:38.594710   56531 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1216 20:49:38.784643   56531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:49:38.817492   56531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1216 20:49:38.820204   56531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:49:38.823847   56531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1216 20:49:38.836163   56531 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1216 20:49:38.836212   56531 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:49:38.836249   56531 ssh_runner.go:195] Run: which crictl
	I1216 20:49:38.848477   56531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1216 20:49:38.901816   56531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:49:38.913492   56531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:49:38.943280   56531 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1216 20:49:38.943337   56531 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1216 20:49:38.943397   56531 ssh_runner.go:195] Run: which crictl
	I1216 20:49:38.943418   56531 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1216 20:49:38.943458   56531 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:49:38.943523   56531 ssh_runner.go:195] Run: which crictl
	I1216 20:49:38.951348   56531 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1216 20:49:38.951385   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:49:38.951397   56531 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1216 20:49:38.951435   56531 ssh_runner.go:195] Run: which crictl
	I1216 20:49:38.995047   56531 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1216 20:49:38.995114   56531 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1216 20:49:38.995167   56531 ssh_runner.go:195] Run: which crictl
	I1216 20:49:39.011688   56531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:49:39.036321   56531 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1216 20:49:39.036362   56531 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:49:39.036407   56531 ssh_runner.go:195] Run: which crictl
	I1216 20:49:39.036412   56531 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1216 20:49:39.036452   56531 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:49:39.036497   56531 ssh_runner.go:195] Run: which crictl
	I1216 20:49:39.036525   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 20:49:39.036557   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:49:39.036588   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 20:49:39.061033   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:49:39.061081   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 20:49:39.268715   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:49:39.268775   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:49:39.268822   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 20:49:39.268888   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 20:49:39.268909   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:49:39.268976   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:49:39.269003   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 20:49:39.428104   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 20:49:39.428152   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 20:49:39.428200   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:49:39.428228   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:49:39.428256   56531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1216 20:49:39.428331   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 20:49:39.428483   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:49:39.542834   56531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1216 20:49:39.553237   56531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1216 20:49:39.554709   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:49:39.554729   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:49:39.560655   56531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1216 20:49:39.560697   56531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1216 20:49:39.610889   56531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1216 20:49:39.612694   56531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1216 20:49:39.612747   56531 cache_images.go:92] duration metric: took 1.019930851s to LoadCachedImages
	W1216 20:49:39.612825   56531 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1216 20:49:39.612841   56531 kubeadm.go:934] updating node { 192.168.72.240 8443 v1.20.0 crio true true} ...
	I1216 20:49:39.612983   56531 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-847766 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 20:49:39.613062   56531 ssh_runner.go:195] Run: crio config
	I1216 20:49:39.665201   56531 cni.go:84] Creating CNI manager for ""
	I1216 20:49:39.665232   56531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:49:39.665244   56531 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:49:39.665265   56531 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.240 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-847766 NodeName:old-k8s-version-847766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1216 20:49:39.665408   56531 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-847766"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 20:49:39.665465   56531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1216 20:49:39.675824   56531 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:49:39.675887   56531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:49:39.686107   56531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1216 20:49:39.703746   56531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:49:39.720998   56531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1216 20:49:39.741871   56531 ssh_runner.go:195] Run: grep 192.168.72.240	control-plane.minikube.internal$ /etc/hosts
	I1216 20:49:39.747296   56531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:49:39.762495   56531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:49:39.896834   56531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:49:39.915118   56531 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766 for IP: 192.168.72.240
	I1216 20:49:39.915142   56531 certs.go:194] generating shared ca certs ...
	I1216 20:49:39.915163   56531 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:49:39.915358   56531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:49:39.915399   56531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:49:39.915406   56531 certs.go:256] generating profile certs ...
	I1216 20:49:39.915473   56531 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.key
	I1216 20:49:39.915502   56531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.crt with IP's: []
	I1216 20:49:39.987915   56531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.crt ...
	I1216 20:49:39.987951   56531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.crt: {Name:mk1b3cb29709881f505e20ebed154122396e997a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:49:39.988147   56531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.key ...
	I1216 20:49:39.988167   56531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.key: {Name:mke44337bded7eaafc49a0d47a7b3425df020e7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:49:39.988279   56531 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key.6c8704df
	I1216 20:49:39.988300   56531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt.6c8704df with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.240]
	I1216 20:49:40.295443   56531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt.6c8704df ...
	I1216 20:49:40.295484   56531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt.6c8704df: {Name:mk232487c862ec228ef6989676c146335af7baf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:49:40.295680   56531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key.6c8704df ...
	I1216 20:49:40.295698   56531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key.6c8704df: {Name:mkf65e89456c49009688dd53057c1791551574d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:49:40.295799   56531 certs.go:381] copying /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt.6c8704df -> /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt
	I1216 20:49:40.295893   56531 certs.go:385] copying /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key.6c8704df -> /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key
	I1216 20:49:40.295970   56531 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key
	I1216 20:49:40.295995   56531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.crt with IP's: []
	I1216 20:49:40.404276   56531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.crt ...
	I1216 20:49:40.404308   56531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.crt: {Name:mk77caa74d193d4defbb3235785fc2b444b7a7b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:49:40.442338   56531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key ...
	I1216 20:49:40.442381   56531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key: {Name:mk97e44419ef6855e928d6392f9fd28fb9baf09c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:49:40.442606   56531 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:49:40.442668   56531 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:49:40.442685   56531 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:49:40.442726   56531 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:49:40.442760   56531 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:49:40.442798   56531 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:49:40.442854   56531 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:49:40.443492   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:49:40.472119   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:49:40.497991   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:49:40.523528   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:49:40.548863   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 20:49:40.582718   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 20:49:40.608389   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:49:40.634836   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 20:49:40.661381   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:49:40.687619   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:49:40.713124   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:49:40.739606   56531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
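
For context, the profile certificates written above by crypto.go (client.crt, apiserver.crt with its IP SANs, proxy-client.crt) are ordinary x509 leaf certificates signed by the shared minikubeCA. A minimal, self-contained Go sketch of that pattern, using an in-memory CA and the SAN IPs taken from the log (illustrative only, not minikube's actual implementation):

```go
// Sketch: issue a leaf certificate signed by a CA, with IP SANs.
// The CA here is generated in-memory purely for illustration; minikube
// reuses the existing .minikube/ca.{crt,key} ("minikubeCA") instead.
// Errors are elided for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in for the shared CA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with the IP SANs seen in the apiserver profile cert above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.240"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
```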
	I1216 20:49:40.760923   56531 ssh_runner.go:195] Run: openssl version
	I1216 20:49:40.776674   56531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:49:40.795440   56531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:49:40.804634   56531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:49:40.804711   56531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:49:40.816878   56531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 20:49:40.830987   56531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:49:40.843838   56531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:49:40.848859   56531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:49:40.848933   56531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:49:40.857201   56531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:49:40.869306   56531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:49:40.881995   56531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:49:40.888201   56531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:49:40.888286   56531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:49:40.895874   56531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 20:49:40.912742   56531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:49:40.917878   56531 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 20:49:40.917944   56531 kubeadm.go:392] StartCluster: {Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:49:40.918036   56531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:49:40.918093   56531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:49:40.963021   56531 cri.go:89] found id: ""
	I1216 20:49:40.963104   56531 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 20:49:40.974677   56531 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 20:49:40.985702   56531 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:49:40.996723   56531 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:49:40.996744   56531 kubeadm.go:157] found existing configuration files:
	
	I1216 20:49:40.996789   56531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 20:49:41.007593   56531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:49:41.007664   56531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:49:41.018265   56531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 20:49:41.028526   56531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:49:41.028591   56531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:49:41.038611   56531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 20:49:41.050726   56531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:49:41.050806   56531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:49:41.060798   56531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 20:49:41.070487   56531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:49:41.070557   56531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 20:49:41.081942   56531 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 20:49:41.378999   56531 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 20:49:39.762576   57211 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I1216 20:49:39.762636   57211 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I1216 20:49:39.762665   57211 cache.go:56] Caching tarball of preloaded images
	I1216 20:49:39.762814   57211 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:49:39.762838   57211 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I1216 20:49:39.762989   57211 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/stopped-upgrade-976873/config.json ...
	I1216 20:49:39.818028   57211 start.go:360] acquireMachinesLock for stopped-upgrade-976873: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:49:42.800764   57211 start.go:364] duration metric: took 2.98266012s to acquireMachinesLock for "stopped-upgrade-976873"
	I1216 20:49:42.800829   57211 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:49:42.800838   57211 fix.go:54] fixHost starting: 
	I1216 20:49:42.801248   57211 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:49:42.801293   57211 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:49:42.822510   57211 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46465
	I1216 20:49:42.823034   57211 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:49:42.823563   57211 main.go:141] libmachine: Using API Version  1
	I1216 20:49:42.823589   57211 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:49:42.823930   57211 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:49:42.824127   57211 main.go:141] libmachine: (stopped-upgrade-976873) Calling .DriverName
	I1216 20:49:42.824273   57211 main.go:141] libmachine: (stopped-upgrade-976873) Calling .GetState
	I1216 20:49:42.825880   57211 fix.go:112] recreateIfNeeded on stopped-upgrade-976873: state=Stopped err=<nil>
	I1216 20:49:42.825911   57211 main.go:141] libmachine: (stopped-upgrade-976873) Calling .DriverName
	W1216 20:49:42.826080   57211 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:49:42.828075   57211 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-976873" ...
	I1216 20:49:42.829697   57211 main.go:141] libmachine: (stopped-upgrade-976873) Calling .Start
	I1216 20:49:42.829882   57211 main.go:141] libmachine: (stopped-upgrade-976873) Ensuring networks are active...
	I1216 20:49:42.830632   57211 main.go:141] libmachine: (stopped-upgrade-976873) Ensuring network default is active
	I1216 20:49:42.830970   57211 main.go:141] libmachine: (stopped-upgrade-976873) Ensuring network mk-stopped-upgrade-976873 is active
	I1216 20:49:42.831429   57211 main.go:141] libmachine: (stopped-upgrade-976873) Getting domain xml...
	I1216 20:49:42.832150   57211 main.go:141] libmachine: (stopped-upgrade-976873) Creating domain...
	I1216 20:49:42.057514   56772 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:49:42.091067   56772 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:49:42.091105   56772 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:49:42.091238   56772 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:49:42.091393   56772 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:49:42.091536   56772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:49:42.262730   56772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:49:42.450455   56772 start.go:296] duration metric: took 667.277858ms for postStartSetup
	I1216 20:49:42.450505   56772 fix.go:56] duration metric: took 10.509480243s for fixHost
	I1216 20:49:42.450531   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHHostname
	I1216 20:49:42.454477   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:49:42.455055   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:48:32 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:49:42.455083   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:49:42.455334   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHPort
	I1216 20:49:42.455599   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:49:42.455812   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:49:42.455964   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHUsername
	I1216 20:49:42.456183   56772 main.go:141] libmachine: Using SSH client type: native
	I1216 20:49:42.456433   56772 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.61 22 <nil> <nil>}
	I1216 20:49:42.456458   56772 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:49:42.800580   56772 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382182.755998061
	
	I1216 20:49:42.800612   56772 fix.go:216] guest clock: 1734382182.755998061
	I1216 20:49:42.800623   56772 fix.go:229] Guest: 2024-12-16 20:49:42.755998061 +0000 UTC Remote: 2024-12-16 20:49:42.45051013 +0000 UTC m=+45.620208795 (delta=305.487931ms)
	I1216 20:49:42.800649   56772 fix.go:200] guest clock delta is within tolerance: 305.487931ms
	I1216 20:49:42.800655   56772 start.go:83] releasing machines lock for "kubernetes-upgrade-560677", held for 10.859665504s
	I1216 20:49:42.800690   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .DriverName
	I1216 20:49:42.801027   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetIP
	I1216 20:49:42.804172   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:49:42.804547   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:48:32 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:49:42.804575   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:49:42.804795   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .DriverName
	I1216 20:49:42.805326   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .DriverName
	I1216 20:49:42.805494   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .DriverName
	I1216 20:49:42.805588   56772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:49:42.805640   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHHostname
	I1216 20:49:42.805705   56772 ssh_runner.go:195] Run: cat /version.json
	I1216 20:49:42.805729   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHHostname
	I1216 20:49:42.808563   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:49:42.808622   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:49:42.808921   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:48:32 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:49:42.808970   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:49:42.809010   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:48:32 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:49:42.809034   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:49:42.809154   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHPort
	I1216 20:49:42.809215   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHPort
	I1216 20:49:42.809326   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:49:42.809439   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHKeyPath
	I1216 20:49:42.809524   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHUsername
	I1216 20:49:42.809578   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetSSHUsername
	I1216 20:49:42.809633   56772 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/kubernetes-upgrade-560677/id_rsa Username:docker}
	I1216 20:49:42.809760   56772 sshutil.go:53] new ssh client: &{IP:192.168.50.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/kubernetes-upgrade-560677/id_rsa Username:docker}
	I1216 20:49:43.166205   56772 ssh_runner.go:195] Run: systemctl --version
	I1216 20:49:43.188047   56772 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:49:43.396643   56772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:49:43.411960   56772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:49:43.412048   56772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:49:43.430438   56772 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 20:49:43.430474   56772 start.go:495] detecting cgroup driver to use...
	I1216 20:49:43.430558   56772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:49:43.456672   56772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:49:43.475442   56772 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:49:43.475512   56772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:49:43.496515   56772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:49:43.514432   56772 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:49:43.754179   56772 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:49:44.008450   56772 docker.go:233] disabling docker service ...
	I1216 20:49:44.008557   56772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:49:44.065161   56772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:49:44.120547   56772 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:49:44.392283   56772 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:49:44.613379   56772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:49:44.632326   56772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:49:44.656429   56772 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 20:49:44.656497   56772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:49:44.677604   56772 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:49:44.677671   56772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:49:44.690349   56772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:49:44.703208   56772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:49:44.723938   56772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:49:44.741258   56772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:49:44.756890   56772 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:49:44.778746   56772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:49:44.794166   56772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:49:44.808578   56772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 20:49:44.822513   56772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:49:45.016510   56772 ssh_runner.go:195] Run: sudo systemctl restart crio
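
The sed commands above rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted: the pause image and the cgroup manager. A rough Go equivalent of those two whole-line rewrites (path and values taken from the log; error handling kept minimal; not minikube's actual code):

```go
// Sketch: rewrite pause_image and cgroup_manager in a crio drop-in config,
// mirroring the two sed invocations above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Replace whole lines, like sed 's|^.*pause_image = .*$|...|'.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(path, data, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```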
	I1216 20:49:44.300191   57211 main.go:141] libmachine: (stopped-upgrade-976873) Waiting to get IP...
	I1216 20:49:44.301367   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | domain stopped-upgrade-976873 has defined MAC address 52:54:00:d1:85:80 in network mk-stopped-upgrade-976873
	I1216 20:49:44.301822   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | unable to find current IP address of domain stopped-upgrade-976873 in network mk-stopped-upgrade-976873
	I1216 20:49:44.301928   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | I1216 20:49:44.301819   57263 retry.go:31] will retry after 215.832277ms: waiting for machine to come up
	I1216 20:49:44.519724   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | domain stopped-upgrade-976873 has defined MAC address 52:54:00:d1:85:80 in network mk-stopped-upgrade-976873
	I1216 20:49:44.520413   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | unable to find current IP address of domain stopped-upgrade-976873 in network mk-stopped-upgrade-976873
	I1216 20:49:44.520459   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | I1216 20:49:44.520354   57263 retry.go:31] will retry after 238.549591ms: waiting for machine to come up
	I1216 20:49:44.761103   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | domain stopped-upgrade-976873 has defined MAC address 52:54:00:d1:85:80 in network mk-stopped-upgrade-976873
	I1216 20:49:44.761663   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | unable to find current IP address of domain stopped-upgrade-976873 in network mk-stopped-upgrade-976873
	I1216 20:49:44.761690   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | I1216 20:49:44.761614   57263 retry.go:31] will retry after 429.535036ms: waiting for machine to come up
	I1216 20:49:45.193138   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | domain stopped-upgrade-976873 has defined MAC address 52:54:00:d1:85:80 in network mk-stopped-upgrade-976873
	I1216 20:49:45.193730   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | unable to find current IP address of domain stopped-upgrade-976873 in network mk-stopped-upgrade-976873
	I1216 20:49:45.193760   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | I1216 20:49:45.193681   57263 retry.go:31] will retry after 584.285869ms: waiting for machine to come up
	I1216 20:49:45.779487   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | domain stopped-upgrade-976873 has defined MAC address 52:54:00:d1:85:80 in network mk-stopped-upgrade-976873
	I1216 20:49:45.780031   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | unable to find current IP address of domain stopped-upgrade-976873 in network mk-stopped-upgrade-976873
	I1216 20:49:45.780062   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | I1216 20:49:45.779963   57263 retry.go:31] will retry after 591.742701ms: waiting for machine to come up
	I1216 20:49:46.373706   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | domain stopped-upgrade-976873 has defined MAC address 52:54:00:d1:85:80 in network mk-stopped-upgrade-976873
	I1216 20:49:46.374234   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | unable to find current IP address of domain stopped-upgrade-976873 in network mk-stopped-upgrade-976873
	I1216 20:49:46.374256   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | I1216 20:49:46.374192   57263 retry.go:31] will retry after 685.453905ms: waiting for machine to come up
	I1216 20:49:47.061032   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | domain stopped-upgrade-976873 has defined MAC address 52:54:00:d1:85:80 in network mk-stopped-upgrade-976873
	I1216 20:49:47.061512   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | unable to find current IP address of domain stopped-upgrade-976873 in network mk-stopped-upgrade-976873
	I1216 20:49:47.061537   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | I1216 20:49:47.061470   57263 retry.go:31] will retry after 732.848988ms: waiting for machine to come up
	I1216 20:49:47.796078   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | domain stopped-upgrade-976873 has defined MAC address 52:54:00:d1:85:80 in network mk-stopped-upgrade-976873
	I1216 20:49:47.796672   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | unable to find current IP address of domain stopped-upgrade-976873 in network mk-stopped-upgrade-976873
	I1216 20:49:47.796701   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | I1216 20:49:47.796628   57263 retry.go:31] will retry after 991.803248ms: waiting for machine to come up
	I1216 20:49:48.790390   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | domain stopped-upgrade-976873 has defined MAC address 52:54:00:d1:85:80 in network mk-stopped-upgrade-976873
	I1216 20:49:48.791001   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | unable to find current IP address of domain stopped-upgrade-976873 in network mk-stopped-upgrade-976873
	I1216 20:49:48.791027   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | I1216 20:49:48.790947   57263 retry.go:31] will retry after 1.219124339s: waiting for machine to come up
	I1216 20:49:50.011332   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | domain stopped-upgrade-976873 has defined MAC address 52:54:00:d1:85:80 in network mk-stopped-upgrade-976873
	I1216 20:49:50.011901   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | unable to find current IP address of domain stopped-upgrade-976873 in network mk-stopped-upgrade-976873
	I1216 20:49:50.011935   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | I1216 20:49:50.011885   57263 retry.go:31] will retry after 2.097522537s: waiting for machine to come up
	I1216 20:49:52.112349   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | domain stopped-upgrade-976873 has defined MAC address 52:54:00:d1:85:80 in network mk-stopped-upgrade-976873
	I1216 20:49:52.112845   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | unable to find current IP address of domain stopped-upgrade-976873 in network mk-stopped-upgrade-976873
	I1216 20:49:52.112869   57211 main.go:141] libmachine: (stopped-upgrade-976873) DBG | I1216 20:49:52.112807   57263 retry.go:31] will retry after 2.098758273s: waiting for machine to come up
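
The retry.go lines above show the driver polling libvirt for the restarted domain's DHCP lease, waiting a little longer between attempts each time. A generic sketch of that poll-with-growing-backoff loop (lookupIP is a hypothetical stand-in for the libvirt lease query; the growth factor and addresses are illustrative):

```go
// Sketch: poll for a VM's IP with a growing delay between attempts,
// in the spirit of the retry.go lines above.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // wait a bit longer each round
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.61.100", nil // hypothetical address
	}, 10)
	fmt.Println(ip, err)
}
```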
	I1216 20:49:55.445890   56772 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.429336562s)
	I1216 20:49:55.445936   56772 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:49:55.445982   56772 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:49:55.451857   56772 start.go:563] Will wait 60s for crictl version
	I1216 20:49:55.451945   56772 ssh_runner.go:195] Run: which crictl
	I1216 20:49:55.456515   56772 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:49:55.505368   56772 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:49:55.505459   56772 ssh_runner.go:195] Run: crio --version
	I1216 20:49:55.536716   56772 ssh_runner.go:195] Run: crio --version
	I1216 20:49:55.571203   56772 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 20:49:55.572631   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) Calling .GetIP
	I1216 20:49:55.575913   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:49:55.576308   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:f3:06", ip: ""} in network mk-kubernetes-upgrade-560677: {Iface:virbr2 ExpiryTime:2024-12-16 21:48:32 +0000 UTC Type:0 Mac:52:54:00:0a:f3:06 Iaid: IPaddr:192.168.50.61 Prefix:24 Hostname:kubernetes-upgrade-560677 Clientid:01:52:54:00:0a:f3:06}
	I1216 20:49:55.576346   56772 main.go:141] libmachine: (kubernetes-upgrade-560677) DBG | domain kubernetes-upgrade-560677 has defined IP address 192.168.50.61 and MAC address 52:54:00:0a:f3:06 in network mk-kubernetes-upgrade-560677
	I1216 20:49:55.576571   56772 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1216 20:49:55.581511   56772 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-560677 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kubernetes-upgrade-560677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:49:55.581623   56772 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:49:55.581677   56772 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:49:55.626385   56772 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 20:49:55.626410   56772 crio.go:433] Images already preloaded, skipping extraction
	I1216 20:49:55.626458   56772 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:49:55.664757   56772 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 20:49:55.664782   56772 cache_images.go:84] Images are preloaded, skipping loading
	I1216 20:49:55.664791   56772 kubeadm.go:934] updating node { 192.168.50.61 8443 v1.32.0 crio true true} ...
	I1216 20:49:55.664918   56772 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-560677 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:kubernetes-upgrade-560677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 20:49:55.664998   56772 ssh_runner.go:195] Run: crio config
	I1216 20:49:55.714644   56772 cni.go:84] Creating CNI manager for ""
	I1216 20:49:55.714672   56772 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:49:55.714683   56772 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:49:55.714712   56772 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.61 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-560677 NodeName:kubernetes-upgrade-560677 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 20:49:55.714883   56772 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-560677"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.61"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.61"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 20:49:55.714954   56772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 20:49:55.726215   56772 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:49:55.726284   56772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:49:55.737345   56772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1216 20:49:55.756716   56772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:49:55.774707   56772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2302 bytes)
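
The 2302-byte kubeadm.yaml.new copied here is the manifest dumped above, which minikube renders from a Go template filled with the options logged at kubeadm.go:189. A simplified sketch of that templating step (the template text and field names below are stand-ins, not minikube's real template):

```go
// Sketch: render a kubeadm manifest from a Go template, in the spirit of
// the config dumped at kubeadm.go:195 above.
package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress  string
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

const manifest = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(manifest))
	_ = t.Execute(os.Stdout, kubeadmOpts{
		AdvertiseAddress:  "192.168.50.61",
		APIServerPort:     8443,
		KubernetesVersion: "v1.32.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	})
}
```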
	I1216 20:49:55.793498   56772 ssh_runner.go:195] Run: grep 192.168.50.61	control-plane.minikube.internal$ /etc/hosts
	I1216 20:49:55.798241   56772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:49:55.948006   56772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:49:55.967583   56772 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677 for IP: 192.168.50.61
	I1216 20:49:55.967609   56772 certs.go:194] generating shared ca certs ...
	I1216 20:49:55.967629   56772 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:49:55.967810   56772 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:49:55.967866   56772 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:49:55.967880   56772 certs.go:256] generating profile certs ...
	I1216 20:49:55.967989   56772 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/client.key
	I1216 20:49:55.968064   56772 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/apiserver.key.9a37601c
	I1216 20:49:55.968117   56772 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/proxy-client.key
	I1216 20:49:55.968265   56772 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:49:55.968311   56772 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:49:55.968326   56772 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:49:55.968411   56772 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:49:55.968459   56772 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:49:55.968498   56772 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:49:55.968556   56772 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:49:55.969362   56772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:49:55.999110   56772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:49:56.027545   56772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:49:56.062720   56772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:49:56.091138   56772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1216 20:49:56.119321   56772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 20:49:56.146951   56772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:49:56.177919   56772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kubernetes-upgrade-560677/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 20:49:56.207804   56772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:49:56.234937   56772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:49:56.260998   56772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:49:56.286760   56772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 20:49:56.305564   56772 ssh_runner.go:195] Run: openssl version
	I1216 20:49:56.312301   56772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:49:56.324446   56772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:49:56.329490   56772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:49:56.329551   56772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:49:56.335645   56772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:49:56.346764   56772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:49:56.359823   56772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:49:56.364692   56772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:49:56.364764   56772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:49:56.371374   56772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 20:49:56.381729   56772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:49:56.394423   56772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:49:56.400006   56772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:49:56.400104   56772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:49:56.406297   56772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 20:49:56.418299   56772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:49:56.423651   56772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 20:49:56.429884   56772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 20:49:56.436126   56772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 20:49:56.442391   56772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 20:49:56.449392   56772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 20:49:56.455984   56772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
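
Each openssl x509 -noout -in <cert> -checkend 86400 run above asks whether the given certificate expires within the next 24 hours. The same check expressed in Go (the path is one example from the log):

```go
// Sketch: report whether a PEM certificate expires within a given window,
// equivalent to `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the cert's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```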
	I1216 20:49:56.462024   56772 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-560677 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kubernetes-upgrade-560677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.61 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:49:56.462141   56772 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:49:56.462193   56772 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:49:56.501710   56772 cri.go:89] found id: "59665016b607fad2e2d94ac8840cb2d5f0a24fd4fdffa55bbc54e8d6f1db18e6"
	I1216 20:49:56.501749   56772 cri.go:89] found id: "000ee1b5ab3cddcf83f5e96b1731fe530aa8b635c5c64f7da97faaea6472f540"
	I1216 20:49:56.501755   56772 cri.go:89] found id: "52841161ba18cf72b8230339cd1b20033f3342417dab565f47336135b3bc2ec9"
	I1216 20:49:56.501773   56772 cri.go:89] found id: "0af1318719964b63335764bd4b19a6144224e0e2b9ce350352d466c9109888ed"
	I1216 20:49:56.501777   56772 cri.go:89] found id: "3dd7be39e02fa3de312b3678e1663f9cf438c81261bdcc8bcc54a38bd877d3e1"
	I1216 20:49:56.501784   56772 cri.go:89] found id: "005361fd5a9faada983c7f252fda0bffe009787b37c9206b222e240911fe65e3"
	I1216 20:49:56.501788   56772 cri.go:89] found id: "c004e07263c7f6fbf5699f3c84816b6cffbe63461b7e7afd29fdcc114ce146d2"
	I1216 20:49:56.501792   56772 cri.go:89] found id: "0cd09b1b671a8c91d60af7608e1a23c0450f99ac57f9c56e61f9c9d8de106fda"
	I1216 20:49:56.501796   56772 cri.go:89] found id: "162f2c004ec7709984cda2e088c88e970f26e6a3de6fd34b684ee49f7055d5c0"
	I1216 20:49:56.501804   56772 cri.go:89] found id: "df890ec556a770c79e2a1e739a2f456fe9b1f5a5a01430791d22813eb65b31a7"
	I1216 20:49:56.501812   56772 cri.go:89] found id: "613415c8bd767fea869e9b896c6a628cc3d39508ba56ef1699570037099dedab"
	I1216 20:49:56.501816   56772 cri.go:89] found id: "511bd869198e777a11b498ea7671e061bfaefcde5f267f43f0b7b98c2ce87b85"
	I1216 20:49:56.501820   56772 cri.go:89] found id: "10fc954264f0215056bb83eebe9340c7faa52a3618bd7e7c3f5aeacbeab58cff"
	I1216 20:49:56.501825   56772 cri.go:89] found id: "9fc9c55fd5646fac4eed403bb10808ff31533b6369269eccebe453128927ebda"
	I1216 20:49:56.501840   56772 cri.go:89] found id: "d30a7494e86df55e0a5bb64c04ddd6dea51909896391d8c06aece28ab55ead37"
	I1216 20:49:56.501844   56772 cri.go:89] found id: ""
	I1216 20:49:56.501896   56772 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-560677 -n kubernetes-upgrade-560677
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-560677 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-560677" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-560677
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-560677: (1.134336518s)
--- FAIL: TestKubernetesUpgrade (455.60s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (50.34s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-022944 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-022944 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.222935142s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-022944] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20091
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-022944" primary control-plane node in "pause-022944" cluster
	* Updating the running kvm2 "pause-022944" VM ...
	* Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-022944" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
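The failure reported at pause_test.go:100 above comes down to a substring check: the second `minikube start` against the already-running cluster is expected to print "The running cluster does not require reconfiguration", and the captured output does not contain it. A rough Go sketch of that kind of assertion (an illustration, not the actual test code; the command and flags are copied from the invocation above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same invocation as in the test above; CombinedOutput captures stdout and stderr.
		out, err := exec.Command("out/minikube-linux-amd64", "start", "-p", "pause-022944",
			"--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio").CombinedOutput()
		if err != nil {
			fmt.Println("start failed:", err)
		}
		if !strings.Contains(string(out), "The running cluster does not require reconfiguration") {
			fmt.Println("second start did not report a no-op reconfiguration")
		}
	}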
** stderr ** 
	I1216 20:45:52.193401   51593 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:45:52.193527   51593 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:45:52.193536   51593 out.go:358] Setting ErrFile to fd 2...
	I1216 20:45:52.193541   51593 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:45:52.193719   51593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:45:52.194278   51593 out.go:352] Setting JSON to false
	I1216 20:45:52.195277   51593 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5297,"bootTime":1734376655,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 20:45:52.195412   51593 start.go:139] virtualization: kvm guest
	I1216 20:45:52.197847   51593 out.go:177] * [pause-022944] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 20:45:52.199697   51593 notify.go:220] Checking for updates...
	I1216 20:45:52.199711   51593 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 20:45:52.201435   51593 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 20:45:52.202887   51593 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:45:52.204416   51593 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:45:52.205800   51593 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 20:45:52.207174   51593 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 20:45:52.209151   51593 config.go:182] Loaded profile config "pause-022944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:45:52.209756   51593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:45:52.209822   51593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:45:52.226305   51593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36611
	I1216 20:45:52.226778   51593 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:45:52.227324   51593 main.go:141] libmachine: Using API Version  1
	I1216 20:45:52.227353   51593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:45:52.227678   51593 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:45:52.227871   51593 main.go:141] libmachine: (pause-022944) Calling .DriverName
	I1216 20:45:52.228119   51593 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 20:45:52.228452   51593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:45:52.228496   51593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:45:52.243565   51593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44069
	I1216 20:45:52.244107   51593 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:45:52.244695   51593 main.go:141] libmachine: Using API Version  1
	I1216 20:45:52.244740   51593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:45:52.245084   51593 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:45:52.245311   51593 main.go:141] libmachine: (pause-022944) Calling .DriverName
	I1216 20:45:52.284008   51593 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 20:45:52.285497   51593 start.go:297] selected driver: kvm2
	I1216 20:45:52.285514   51593 start.go:901] validating driver "kvm2" against &{Name:pause-022944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.32.0 ClusterName:pause-022944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.189 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:45:52.285710   51593 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 20:45:52.286052   51593 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:45:52.286160   51593 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 20:45:52.303833   51593 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1216 20:45:52.304549   51593 cni.go:84] Creating CNI manager for ""
	I1216 20:45:52.304601   51593 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:45:52.304650   51593 start.go:340] cluster config:
	{Name:pause-022944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:pause-022944 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.189 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false
registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:45:52.304778   51593 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:45:52.307051   51593 out.go:177] * Starting "pause-022944" primary control-plane node in "pause-022944" cluster
	I1216 20:45:52.308410   51593 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:45:52.308464   51593 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:45:52.308477   51593 cache.go:56] Caching tarball of preloaded images
	I1216 20:45:52.308582   51593 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:45:52.308596   51593 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1216 20:45:52.308759   51593 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/pause-022944/config.json ...
	I1216 20:45:52.308975   51593 start.go:360] acquireMachinesLock for pause-022944: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:45:52.309044   51593 start.go:364] duration metric: took 44.585µs to acquireMachinesLock for "pause-022944"
	I1216 20:45:52.309065   51593 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:45:52.309071   51593 fix.go:54] fixHost starting: 
	I1216 20:45:52.309345   51593 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:45:52.309389   51593 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:45:52.324593   51593 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34249
	I1216 20:45:52.325071   51593 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:45:52.325583   51593 main.go:141] libmachine: Using API Version  1
	I1216 20:45:52.325607   51593 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:45:52.325932   51593 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:45:52.326125   51593 main.go:141] libmachine: (pause-022944) Calling .DriverName
	I1216 20:45:52.326255   51593 main.go:141] libmachine: (pause-022944) Calling .GetState
	I1216 20:45:52.328050   51593 fix.go:112] recreateIfNeeded on pause-022944: state=Running err=<nil>
	W1216 20:45:52.328087   51593 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:45:52.330207   51593 out.go:177] * Updating the running kvm2 "pause-022944" VM ...
	I1216 20:45:52.331784   51593 machine.go:93] provisionDockerMachine start ...
	I1216 20:45:52.331809   51593 main.go:141] libmachine: (pause-022944) Calling .DriverName
	I1216 20:45:52.332041   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHHostname
	I1216 20:45:52.334907   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:52.335434   51593 main.go:141] libmachine: (pause-022944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:10:af", ip: ""} in network mk-pause-022944: {Iface:virbr4 ExpiryTime:2024-12-16 21:44:38 +0000 UTC Type:0 Mac:52:54:00:0e:10:af Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:pause-022944 Clientid:01:52:54:00:0e:10:af}
	I1216 20:45:52.335470   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined IP address 192.168.72.189 and MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:52.335616   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHPort
	I1216 20:45:52.335808   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHKeyPath
	I1216 20:45:52.335948   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHKeyPath
	I1216 20:45:52.336064   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHUsername
	I1216 20:45:52.336199   51593 main.go:141] libmachine: Using SSH client type: native
	I1216 20:45:52.336420   51593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.189 22 <nil> <nil>}
	I1216 20:45:52.336434   51593 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:45:52.456692   51593 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-022944
	
	I1216 20:45:52.456728   51593 main.go:141] libmachine: (pause-022944) Calling .GetMachineName
	I1216 20:45:52.456957   51593 buildroot.go:166] provisioning hostname "pause-022944"
	I1216 20:45:52.456983   51593 main.go:141] libmachine: (pause-022944) Calling .GetMachineName
	I1216 20:45:52.457182   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHHostname
	I1216 20:45:52.459617   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:52.459973   51593 main.go:141] libmachine: (pause-022944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:10:af", ip: ""} in network mk-pause-022944: {Iface:virbr4 ExpiryTime:2024-12-16 21:44:38 +0000 UTC Type:0 Mac:52:54:00:0e:10:af Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:pause-022944 Clientid:01:52:54:00:0e:10:af}
	I1216 20:45:52.459998   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined IP address 192.168.72.189 and MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:52.460179   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHPort
	I1216 20:45:52.460344   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHKeyPath
	I1216 20:45:52.460508   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHKeyPath
	I1216 20:45:52.460608   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHUsername
	I1216 20:45:52.460763   51593 main.go:141] libmachine: Using SSH client type: native
	I1216 20:45:52.460928   51593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.189 22 <nil> <nil>}
	I1216 20:45:52.460940   51593 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-022944 && echo "pause-022944" | sudo tee /etc/hostname
	I1216 20:45:52.590919   51593 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-022944
	
	I1216 20:45:52.590949   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHHostname
	I1216 20:45:52.594154   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:52.594521   51593 main.go:141] libmachine: (pause-022944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:10:af", ip: ""} in network mk-pause-022944: {Iface:virbr4 ExpiryTime:2024-12-16 21:44:38 +0000 UTC Type:0 Mac:52:54:00:0e:10:af Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:pause-022944 Clientid:01:52:54:00:0e:10:af}
	I1216 20:45:52.594552   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined IP address 192.168.72.189 and MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:52.594730   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHPort
	I1216 20:45:52.594934   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHKeyPath
	I1216 20:45:52.595103   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHKeyPath
	I1216 20:45:52.595276   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHUsername
	I1216 20:45:52.595433   51593 main.go:141] libmachine: Using SSH client type: native
	I1216 20:45:52.595603   51593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.189 22 <nil> <nil>}
	I1216 20:45:52.595617   51593 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-022944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-022944/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-022944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:45:52.718026   51593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:45:52.718055   51593 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:45:52.718095   51593 buildroot.go:174] setting up certificates
	I1216 20:45:52.718105   51593 provision.go:84] configureAuth start
	I1216 20:45:52.718115   51593 main.go:141] libmachine: (pause-022944) Calling .GetMachineName
	I1216 20:45:52.718381   51593 main.go:141] libmachine: (pause-022944) Calling .GetIP
	I1216 20:45:52.721313   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:52.721676   51593 main.go:141] libmachine: (pause-022944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:10:af", ip: ""} in network mk-pause-022944: {Iface:virbr4 ExpiryTime:2024-12-16 21:44:38 +0000 UTC Type:0 Mac:52:54:00:0e:10:af Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:pause-022944 Clientid:01:52:54:00:0e:10:af}
	I1216 20:45:52.721707   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined IP address 192.168.72.189 and MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:52.721841   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHHostname
	I1216 20:45:52.724574   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:52.724941   51593 main.go:141] libmachine: (pause-022944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:10:af", ip: ""} in network mk-pause-022944: {Iface:virbr4 ExpiryTime:2024-12-16 21:44:38 +0000 UTC Type:0 Mac:52:54:00:0e:10:af Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:pause-022944 Clientid:01:52:54:00:0e:10:af}
	I1216 20:45:52.724988   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined IP address 192.168.72.189 and MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:52.725155   51593 provision.go:143] copyHostCerts
	I1216 20:45:52.725211   51593 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:45:52.725222   51593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:45:52.725291   51593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:45:52.725407   51593 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:45:52.725415   51593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:45:52.725439   51593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:45:52.725508   51593 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:45:52.725516   51593 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:45:52.725534   51593 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:45:52.725588   51593 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.pause-022944 san=[127.0.0.1 192.168.72.189 localhost minikube pause-022944]
	I1216 20:45:52.868612   51593 provision.go:177] copyRemoteCerts
	I1216 20:45:52.868671   51593 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:45:52.868694   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHHostname
	I1216 20:45:52.871296   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:52.871648   51593 main.go:141] libmachine: (pause-022944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:10:af", ip: ""} in network mk-pause-022944: {Iface:virbr4 ExpiryTime:2024-12-16 21:44:38 +0000 UTC Type:0 Mac:52:54:00:0e:10:af Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:pause-022944 Clientid:01:52:54:00:0e:10:af}
	I1216 20:45:52.871681   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined IP address 192.168.72.189 and MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:52.871833   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHPort
	I1216 20:45:52.872072   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHKeyPath
	I1216 20:45:52.872228   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHUsername
	I1216 20:45:52.872372   51593 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/pause-022944/id_rsa Username:docker}
	I1216 20:45:52.962720   51593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:45:52.989321   51593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1216 20:45:53.016432   51593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 20:45:53.044949   51593 provision.go:87] duration metric: took 326.829576ms to configureAuth
	I1216 20:45:53.044983   51593 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:45:53.045238   51593 config.go:182] Loaded profile config "pause-022944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:45:53.045326   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHHostname
	I1216 20:45:53.048125   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:53.048557   51593 main.go:141] libmachine: (pause-022944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:10:af", ip: ""} in network mk-pause-022944: {Iface:virbr4 ExpiryTime:2024-12-16 21:44:38 +0000 UTC Type:0 Mac:52:54:00:0e:10:af Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:pause-022944 Clientid:01:52:54:00:0e:10:af}
	I1216 20:45:53.048587   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined IP address 192.168.72.189 and MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:53.048789   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHPort
	I1216 20:45:53.048960   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHKeyPath
	I1216 20:45:53.049133   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHKeyPath
	I1216 20:45:53.049272   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHUsername
	I1216 20:45:53.049412   51593 main.go:141] libmachine: Using SSH client type: native
	I1216 20:45:53.049576   51593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.189 22 <nil> <nil>}
	I1216 20:45:53.049590   51593 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:45:58.598421   51593 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:45:58.598453   51593 machine.go:96] duration metric: took 6.266650685s to provisionDockerMachine
	I1216 20:45:58.598468   51593 start.go:293] postStartSetup for "pause-022944" (driver="kvm2")
	I1216 20:45:58.598482   51593 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:45:58.598508   51593 main.go:141] libmachine: (pause-022944) Calling .DriverName
	I1216 20:45:58.598866   51593 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:45:58.598900   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHHostname
	I1216 20:45:58.601587   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:58.601940   51593 main.go:141] libmachine: (pause-022944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:10:af", ip: ""} in network mk-pause-022944: {Iface:virbr4 ExpiryTime:2024-12-16 21:44:38 +0000 UTC Type:0 Mac:52:54:00:0e:10:af Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:pause-022944 Clientid:01:52:54:00:0e:10:af}
	I1216 20:45:58.601961   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined IP address 192.168.72.189 and MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:58.602102   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHPort
	I1216 20:45:58.602299   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHKeyPath
	I1216 20:45:58.602472   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHUsername
	I1216 20:45:58.602630   51593 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/pause-022944/id_rsa Username:docker}
	I1216 20:45:58.692308   51593 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:45:58.697071   51593 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:45:58.697103   51593 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:45:58.697172   51593 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:45:58.697259   51593 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:45:58.697378   51593 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:45:58.709103   51593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:45:58.735655   51593 start.go:296] duration metric: took 137.168728ms for postStartSetup
	I1216 20:45:58.735705   51593 fix.go:56] duration metric: took 6.426633214s for fixHost
	I1216 20:45:58.735730   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHHostname
	I1216 20:45:58.738496   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:58.738803   51593 main.go:141] libmachine: (pause-022944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:10:af", ip: ""} in network mk-pause-022944: {Iface:virbr4 ExpiryTime:2024-12-16 21:44:38 +0000 UTC Type:0 Mac:52:54:00:0e:10:af Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:pause-022944 Clientid:01:52:54:00:0e:10:af}
	I1216 20:45:58.738843   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined IP address 192.168.72.189 and MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:58.738995   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHPort
	I1216 20:45:58.739175   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHKeyPath
	I1216 20:45:58.739377   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHKeyPath
	I1216 20:45:58.739495   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHUsername
	I1216 20:45:58.739655   51593 main.go:141] libmachine: Using SSH client type: native
	I1216 20:45:58.739830   51593 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.189 22 <nil> <nil>}
	I1216 20:45:58.739840   51593 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:45:58.856311   51593 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734381958.810775946
	
	I1216 20:45:58.856338   51593 fix.go:216] guest clock: 1734381958.810775946
	I1216 20:45:58.856349   51593 fix.go:229] Guest: 2024-12-16 20:45:58.810775946 +0000 UTC Remote: 2024-12-16 20:45:58.73571093 +0000 UTC m=+6.581310993 (delta=75.065016ms)
	I1216 20:45:58.856373   51593 fix.go:200] guest clock delta is within tolerance: 75.065016ms
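The fix.go lines above compare the guest's `date +%s.%N` reading against the host clock and log whether the delta (here 75ms) stays within tolerance. A compact sketch of that comparison (illustrative; it runs `date` locally instead of over SSH, and the 2s tolerance is an arbitrary stand-in, not minikube's value):

	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		// Stand-in for the SSH command in the log; parse "seconds.nanoseconds".
		out, err := exec.Command("date", "+%s.%N").Output()
		if err != nil {
			panic(err)
		}
		secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("clock delta: %v (within 2s tolerance: %v)\n", delta, delta <= 2*time.Second)
	}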
	I1216 20:45:58.856381   51593 start.go:83] releasing machines lock for "pause-022944", held for 6.547324172s
	I1216 20:45:58.856398   51593 main.go:141] libmachine: (pause-022944) Calling .DriverName
	I1216 20:45:58.856664   51593 main.go:141] libmachine: (pause-022944) Calling .GetIP
	I1216 20:45:58.859363   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:58.859709   51593 main.go:141] libmachine: (pause-022944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:10:af", ip: ""} in network mk-pause-022944: {Iface:virbr4 ExpiryTime:2024-12-16 21:44:38 +0000 UTC Type:0 Mac:52:54:00:0e:10:af Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:pause-022944 Clientid:01:52:54:00:0e:10:af}
	I1216 20:45:58.859739   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined IP address 192.168.72.189 and MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:58.859899   51593 main.go:141] libmachine: (pause-022944) Calling .DriverName
	I1216 20:45:58.860476   51593 main.go:141] libmachine: (pause-022944) Calling .DriverName
	I1216 20:45:58.860634   51593 main.go:141] libmachine: (pause-022944) Calling .DriverName
	I1216 20:45:58.860763   51593 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:45:58.860805   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHHostname
	I1216 20:45:58.860892   51593 ssh_runner.go:195] Run: cat /version.json
	I1216 20:45:58.860915   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHHostname
	I1216 20:45:58.863638   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:58.863667   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:58.864082   51593 main.go:141] libmachine: (pause-022944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:10:af", ip: ""} in network mk-pause-022944: {Iface:virbr4 ExpiryTime:2024-12-16 21:44:38 +0000 UTC Type:0 Mac:52:54:00:0e:10:af Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:pause-022944 Clientid:01:52:54:00:0e:10:af}
	I1216 20:45:58.864124   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined IP address 192.168.72.189 and MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:58.864154   51593 main.go:141] libmachine: (pause-022944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:10:af", ip: ""} in network mk-pause-022944: {Iface:virbr4 ExpiryTime:2024-12-16 21:44:38 +0000 UTC Type:0 Mac:52:54:00:0e:10:af Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:pause-022944 Clientid:01:52:54:00:0e:10:af}
	I1216 20:45:58.864169   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined IP address 192.168.72.189 and MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:45:58.864438   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHPort
	I1216 20:45:58.864527   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHPort
	I1216 20:45:58.864597   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHKeyPath
	I1216 20:45:58.864674   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHKeyPath
	I1216 20:45:58.864726   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHUsername
	I1216 20:45:58.864815   51593 main.go:141] libmachine: (pause-022944) Calling .GetSSHUsername
	I1216 20:45:58.864880   51593 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/pause-022944/id_rsa Username:docker}
	I1216 20:45:58.864921   51593 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/pause-022944/id_rsa Username:docker}
	I1216 20:45:58.971896   51593 ssh_runner.go:195] Run: systemctl --version
	I1216 20:45:58.978888   51593 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:45:59.140141   51593 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:45:59.147696   51593 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:45:59.147760   51593 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:45:59.158332   51593 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 20:45:59.158357   51593 start.go:495] detecting cgroup driver to use...
	I1216 20:45:59.158413   51593 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:45:59.176867   51593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:45:59.192636   51593 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:45:59.192690   51593 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:45:59.208352   51593 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:45:59.223101   51593 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:45:59.385459   51593 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:45:59.523385   51593 docker.go:233] disabling docker service ...
	I1216 20:45:59.523454   51593 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:45:59.542308   51593 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:45:59.560168   51593 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:45:59.716147   51593 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:45:59.850031   51593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:45:59.866086   51593 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:45:59.889059   51593 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 20:45:59.889116   51593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:45:59.900880   51593 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:45:59.900954   51593 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:45:59.912417   51593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:45:59.923820   51593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:45:59.935018   51593 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:45:59.949589   51593 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:45:59.960510   51593 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:45:59.972863   51593 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:45:59.984162   51593 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:45:59.994868   51593 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 20:46:00.005366   51593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:46:00.149282   51593 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:46:05.837757   51593 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.68843619s)
	I1216 20:46:05.837795   51593 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:46:05.837848   51593 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
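"Will wait 60s for socket path" above is a poll-until-ready step: stat the CRI socket until it exists or the deadline passes. A minimal sketch of such a wait loop (an illustration with an arbitrary 500ms poll interval, not minikube's retry helper):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/crio/crio.sock"
		deadline := time.Now().Add(60 * time.Second)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(sock); err == nil {
				fmt.Println("socket ready:", sock)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for", sock)
	}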
	I1216 20:46:05.845744   51593 start.go:563] Will wait 60s for crictl version
	I1216 20:46:05.845810   51593 ssh_runner.go:195] Run: which crictl
	I1216 20:46:05.851254   51593 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:46:05.890817   51593 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:46:05.890900   51593 ssh_runner.go:195] Run: crio --version
	I1216 20:46:05.930200   51593 ssh_runner.go:195] Run: crio --version
	I1216 20:46:05.968302   51593 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 20:46:05.969679   51593 main.go:141] libmachine: (pause-022944) Calling .GetIP
	I1216 20:46:05.972973   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:46:05.973326   51593 main.go:141] libmachine: (pause-022944) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0e:10:af", ip: ""} in network mk-pause-022944: {Iface:virbr4 ExpiryTime:2024-12-16 21:44:38 +0000 UTC Type:0 Mac:52:54:00:0e:10:af Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:pause-022944 Clientid:01:52:54:00:0e:10:af}
	I1216 20:46:05.973355   51593 main.go:141] libmachine: (pause-022944) DBG | domain pause-022944 has defined IP address 192.168.72.189 and MAC address 52:54:00:0e:10:af in network mk-pause-022944
	I1216 20:46:05.973611   51593 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1216 20:46:05.978669   51593 kubeadm.go:883] updating cluster {Name:pause-022944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0
ClusterName:pause-022944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.189 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-p
lugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:46:05.978789   51593 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:46:05.978832   51593 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:46:06.040915   51593 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 20:46:06.040942   51593 crio.go:433] Images already preloaded, skipping extraction
	I1216 20:46:06.041003   51593 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:46:06.083523   51593 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 20:46:06.083548   51593 cache_images.go:84] Images are preloaded, skipping loading
	I1216 20:46:06.083557   51593 kubeadm.go:934] updating node { 192.168.72.189 8443 v1.32.0 crio true true} ...
	I1216 20:46:06.083667   51593 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-022944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:pause-022944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 20:46:06.083753   51593 ssh_runner.go:195] Run: crio config
	I1216 20:46:06.138189   51593 cni.go:84] Creating CNI manager for ""
	I1216 20:46:06.138215   51593 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:46:06.138228   51593 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:46:06.138254   51593 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.189 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-022944 NodeName:pause-022944 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 20:46:06.138409   51593 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.189
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-022944"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.189"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.189"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 20:46:06.138480   51593 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 20:46:06.149783   51593 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:46:06.149886   51593 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:46:06.161069   51593 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1216 20:46:06.181794   51593 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:46:06.201346   51593 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I1216 20:46:06.219837   51593 ssh_runner.go:195] Run: grep 192.168.72.189	control-plane.minikube.internal$ /etc/hosts
	I1216 20:46:06.224376   51593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:46:06.385836   51593 ssh_runner.go:195] Run: sudo systemctl start kubelet
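The grep a few lines above checks that /etc/hosts already maps control-plane.minikube.internal to the node IP before kubelet is restarted. A rough sketch of that check-then-append step (values taken from this run; the real step runs over SSH with sudo, so this is only an illustration):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry appends "ip\tname" to /etc/hosts unless a line already maps
	// name to ip, mirroring the grep performed in the log before kubelet starts.
	func ensureHostsEntry(ip, name string) error {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			return err
		}
		for _, line := range strings.Split(string(data), "\n") {
			fields := strings.Fields(line)
			if len(fields) >= 2 && fields[0] == ip && fields[1] == name {
				return nil // entry already present, nothing to do
			}
		}
		f, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, 0o644)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = fmt.Fprintf(f, "%s\t%s\n", ip, name)
		return err
	}

	func main() {
		if err := ensureHostsEntry("192.168.72.189", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}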
	I1216 20:46:06.409614   51593 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/pause-022944 for IP: 192.168.72.189
	I1216 20:46:06.409639   51593 certs.go:194] generating shared ca certs ...
	I1216 20:46:06.409659   51593 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:46:06.409831   51593 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:46:06.409903   51593 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:46:06.409918   51593 certs.go:256] generating profile certs ...
	I1216 20:46:06.410020   51593 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/pause-022944/client.key
	I1216 20:46:06.410102   51593 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/pause-022944/apiserver.key.10cf45f7
	I1216 20:46:06.410156   51593 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/pause-022944/proxy-client.key
	I1216 20:46:06.410305   51593 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:46:06.410348   51593 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:46:06.410367   51593 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:46:06.410397   51593 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:46:06.410425   51593 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:46:06.410454   51593 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:46:06.410518   51593 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:46:06.411183   51593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:46:06.449361   51593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:46:06.485250   51593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:46:06.518330   51593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:46:06.550236   51593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/pause-022944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 20:46:06.641158   51593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/pause-022944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 20:46:06.709058   51593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/pause-022944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:46:06.927827   51593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/pause-022944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 20:46:07.172649   51593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:46:07.348731   51593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:46:07.462179   51593 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:46:07.523909   51593 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 20:46:07.665946   51593 ssh_runner.go:195] Run: openssl version
	I1216 20:46:07.717016   51593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:46:07.751424   51593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:46:07.762134   51593 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:46:07.762211   51593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:46:07.776155   51593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 20:46:07.794770   51593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:46:07.811808   51593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:46:07.817550   51593 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:46:07.817617   51593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:46:07.827640   51593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:46:07.841574   51593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:46:07.857277   51593 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:46:07.865050   51593 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:46:07.865130   51593 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:46:07.872302   51593 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
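The symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) come from "openssl x509 -hash -noout", which prints the OpenSSL subject hash that the system trust store uses to look up CA certificates; each cert is then linked as /etc/ssl/certs/<hash>.0. A minimal local sketch of that hash-and-link pattern (the log performs it remotely via sudo; paths are the ones from this run):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of a CA certificate and
	// exposes the cert under /etc/ssl/certs/<hash>.0, matching the ln -fs calls above.
	func linkCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // mirror "ln -fs": replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}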
	I1216 20:46:07.886011   51593 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:46:07.893240   51593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 20:46:07.900474   51593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 20:46:07.907619   51593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 20:46:07.914469   51593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 20:46:07.923299   51593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 20:46:07.938102   51593 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
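Each "openssl x509 -checkend 86400" run above exits non-zero if the certificate expires within the next 24 hours, which is how the restart decides whether control-plane certs need regeneration. The same predicate can be expressed directly with crypto/x509; a small sketch using one of the cert paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within d, the same question "openssl x509 -checkend <seconds>" answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}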
	I1216 20:46:07.949086   51593 kubeadm.go:392] StartCluster: {Name:pause-022944 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 Cl
usterName:pause-022944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.189 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plug
in:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:46:07.949244   51593 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:46:07.949330   51593 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:46:08.021896   51593 cri.go:89] found id: "3f92e6114c6560a93d90ee5508a764de4ba7821079c1f9681d049a5138837ea7"
	I1216 20:46:08.021920   51593 cri.go:89] found id: "4d9ac0f1ddcb695e316df090a4f64ea07294b5a9c21536e57422030b7ae06a6c"
	I1216 20:46:08.021926   51593 cri.go:89] found id: "c482cafbd8f9c2cfef74a63eb751850fb18dd35c0440d01c46655444ca1521cb"
	I1216 20:46:08.021933   51593 cri.go:89] found id: "ee546aaecd2dd7603a85b67d467058f021900a40a5fa269124e39a6391103c98"
	I1216 20:46:08.021938   51593 cri.go:89] found id: "30617acf2d2801c7042c99e0465810bb69557a04c41a3c32ea43501e24cb939c"
	I1216 20:46:08.021942   51593 cri.go:89] found id: "5dbb5fcf4e0f39c2e089d4ea0db7705b4166dd0ea1dd790a9a8ffefd961b6d07"
	I1216 20:46:08.021946   51593 cri.go:89] found id: "94dc43da0589de8cd1598df906befe173f46d0b96683746cb9a214542bc793d1"
	I1216 20:46:08.021950   51593 cri.go:89] found id: "7bb3706cfb6a2ea4f87fc737974a578e06135635ba769495b353700eb0b49b80"
	I1216 20:46:08.021956   51593 cri.go:89] found id: "b9811d3c3291c60d0a922c3e4f32404d61a2664fd0bfe30a2f0b9bffd0d8ffef"
	I1216 20:46:08.021963   51593 cri.go:89] found id: "163896f136f83e4f7ea8163cd5d9e668273d88ea9d87d1ffe4e119fb03a94685"
	I1216 20:46:08.021967   51593 cri.go:89] found id: ""
	I1216 20:46:08.022014   51593 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
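The truncated stderr above ends while cri.go is enumerating kube-system containers with "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system", which prints one container ID per line. A minimal sketch of that listing step (an illustration of the crictl invocation seen in the log, not minikube's cri.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// kubeSystemContainers lists every container (running or not) whose pod lives
	// in the kube-system namespace and returns the container IDs.
	func kubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := kubeSystemContainers()
		if err != nil {
			panic(err)
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}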
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-022944 -n pause-022944
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-022944 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-022944 logs -n 25: (1.44095465s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-647112 sudo cat                            | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo cat                            | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo cat                            | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo docker                         | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo cat                            | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo cat                            | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo cat                            | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo cat                            | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo find                           | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo crio                           | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-647112                                     | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC | 16 Dec 24 20:46 UTC |
	| ssh     | -p NoKubernetes-545724 sudo                          | NoKubernetes-545724       | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |         |                     |                     |
	|         | service kubelet                                      |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-545724                               | NoKubernetes-545724       | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC | 16 Dec 24 20:46 UTC |
	| start   | -p force-systemd-env-893512                          | force-systemd-env-893512  | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-406516                         | force-systemd-flag-406516 | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 20:46:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
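The header documents the glog-style line format used by every entry below. If you need to post-process these logs, a small sketch of parsing that format with a regular expression (the pattern is derived from the format string above; the sample line is taken from this log):

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches the documented format:
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		line := "I1216 20:46:32.530534   54398 out.go:345] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("not a log line in the documented format")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\n", m[1], m[2], m[3], m[4], m[5])
	}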
	I1216 20:46:32.530534   54398 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:46:32.530650   54398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:46:32.530657   54398 out.go:358] Setting ErrFile to fd 2...
	I1216 20:46:32.530661   54398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:46:32.530913   54398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:46:32.531665   54398 out.go:352] Setting JSON to false
	I1216 20:46:32.532701   54398 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5338,"bootTime":1734376655,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 20:46:32.532834   54398 start.go:139] virtualization: kvm guest
	I1216 20:46:32.535210   54398 out.go:177] * [force-systemd-flag-406516] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 20:46:32.536763   54398 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 20:46:32.536776   54398 notify.go:220] Checking for updates...
	I1216 20:46:32.539480   54398 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 20:46:32.540914   54398 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:46:32.542477   54398 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:46:32.544007   54398 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 20:46:32.545628   54398 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 20:46:32.547435   54398 config.go:182] Loaded profile config "kubernetes-upgrade-560677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:46:32.547573   54398 config.go:182] Loaded profile config "pause-022944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:46:32.547685   54398 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 20:46:32.582301   54398 out.go:177] * Using the kvm2 driver based on user configuration
	I1216 20:46:32.504657   54373 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.34.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.34.0/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:46:33.788889   54373 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 20:46:33.789180   54373 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 20:46:33.789217   54373 cni.go:84] Creating CNI manager for ""
	I1216 20:46:33.789255   54373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:46:33.789266   54373 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 20:46:33.789324   54373 start.go:340] cluster config:
	{Name:force-systemd-env-893512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:force-systemd-env-893512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:46:33.789454   54373 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:46:33.791821   54373 out.go:177] * Starting "force-systemd-env-893512" primary control-plane node in "force-systemd-env-893512" cluster
	I1216 20:46:33.793288   54373 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:46:33.793331   54373 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:46:33.793341   54373 cache.go:56] Caching tarball of preloaded images
	I1216 20:46:33.793435   54373 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:46:33.793447   54373 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1216 20:46:33.793542   54373 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/force-systemd-env-893512/config.json ...
	I1216 20:46:33.793559   54373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/force-systemd-env-893512/config.json: {Name:mka0d3de6c5ec33a2b9026b545de211f5b96bb45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:46:33.793717   54373 start.go:360] acquireMachinesLock for force-systemd-env-893512: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:46:33.793770   54373 start.go:364] duration metric: took 28.08µs to acquireMachinesLock for "force-systemd-env-893512"
	I1216 20:46:33.793798   54373 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-893512 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.32.0 ClusterName:force-systemd-env-893512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 20:46:33.793883   54373 start.go:125] createHost starting for "" (driver="kvm2")
	I1216 20:46:32.583637   54398 start.go:297] selected driver: kvm2
	I1216 20:46:32.583657   54398 start.go:901] validating driver "kvm2" against <nil>
	I1216 20:46:32.583670   54398 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 20:46:32.584429   54398 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:46:33.788985   54398 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 20:46:33.807111   54398 install.go:137] /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1216 20:46:33.807159   54398 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 20:46:33.807458   54398 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 20:46:33.807491   54398 cni.go:84] Creating CNI manager for ""
	I1216 20:46:33.807541   54398 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:46:33.807554   54398 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 20:46:33.807607   54398 start.go:340] cluster config:
	{Name:force-systemd-flag-406516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:force-systemd-flag-406516 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:46:33.807740   54398 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:46:33.809737   54398 out.go:177] * Starting "force-systemd-flag-406516" primary control-plane node in "force-systemd-flag-406516" cluster
	I1216 20:46:33.700440   51593 pod_ready.go:103] pod "etcd-pause-022944" in "kube-system" namespace has status "Ready":"False"
	I1216 20:46:34.692633   51593 pod_ready.go:93] pod "etcd-pause-022944" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:34.692660   51593 pod_ready.go:82] duration metric: took 14.505609964s for pod "etcd-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.692671   51593 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.696827   51593 pod_ready.go:93] pod "kube-apiserver-pause-022944" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:34.696847   51593 pod_ready.go:82] duration metric: took 4.170576ms for pod "kube-apiserver-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.696856   51593 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.700774   51593 pod_ready.go:93] pod "kube-controller-manager-pause-022944" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:34.700805   51593 pod_ready.go:82] duration metric: took 3.942829ms for pod "kube-controller-manager-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.700819   51593 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lr8m7" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.705799   51593 pod_ready.go:93] pod "kube-proxy-lr8m7" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:34.705825   51593 pod_ready.go:82] duration metric: took 4.998729ms for pod "kube-proxy-lr8m7" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.705838   51593 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.711227   51593 pod_ready.go:93] pod "kube-scheduler-pause-022944" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:34.711266   51593 pod_ready.go:82] duration metric: took 5.419704ms for pod "kube-scheduler-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.711277   51593 pod_ready.go:39] duration metric: took 14.534820704s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
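Each pod_ready check above resolves once the pod's Ready condition turns True. With client-go the same predicate is a short helper; a sketch assuming the kubeconfig path and the etcd-pause-022944 pod name that appear in this log (not minikube's pod_ready.go):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the named pod has condition Ready=True,
	// the condition the pod_ready waits in the log poll for.
	func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20091-7083/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := podReady(cs, "kube-system", "etcd-pause-022944")
		fmt.Println("ready:", ready, "err:", err)
	}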
	I1216 20:46:34.711297   51593 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 20:46:34.725505   51593 ops.go:34] apiserver oom_adj: -16
	I1216 20:46:34.725543   51593 kubeadm.go:597] duration metric: took 26.57521221s to restartPrimaryControlPlane
	I1216 20:46:34.725556   51593 kubeadm.go:394] duration metric: took 26.776482005s to StartCluster
	I1216 20:46:34.725574   51593 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:46:34.725636   51593 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:46:34.726295   51593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:46:34.726513   51593 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.189 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 20:46:34.726623   51593 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 20:46:34.726790   51593 config.go:182] Loaded profile config "pause-022944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:46:34.729404   51593 out.go:177] * Verifying Kubernetes components...
	I1216 20:46:34.729401   51593 out.go:177] * Enabled addons: 
	I1216 20:46:33.795742   54373 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 20:46:33.795893   54373 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:46:33.795934   54373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:46:33.812578   54373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
	I1216 20:46:33.813128   54373 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:46:33.813738   54373 main.go:141] libmachine: Using API Version  1
	I1216 20:46:33.813772   54373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:46:33.814201   54373 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:46:33.814420   54373 main.go:141] libmachine: (force-systemd-env-893512) Calling .GetMachineName
	I1216 20:46:33.814635   54373 main.go:141] libmachine: (force-systemd-env-893512) Calling .DriverName
	I1216 20:46:33.814775   54373 start.go:159] libmachine.API.Create for "force-systemd-env-893512" (driver="kvm2")
	I1216 20:46:33.814802   54373 client.go:168] LocalClient.Create starting
	I1216 20:46:33.814838   54373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem
	I1216 20:46:33.814882   54373 main.go:141] libmachine: Decoding PEM data...
	I1216 20:46:33.814905   54373 main.go:141] libmachine: Parsing certificate...
	I1216 20:46:33.814966   54373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem
	I1216 20:46:33.814998   54373 main.go:141] libmachine: Decoding PEM data...
	I1216 20:46:33.815015   54373 main.go:141] libmachine: Parsing certificate...
	I1216 20:46:33.815051   54373 main.go:141] libmachine: Running pre-create checks...
	I1216 20:46:33.815064   54373 main.go:141] libmachine: (force-systemd-env-893512) Calling .PreCreateCheck
	I1216 20:46:33.815473   54373 main.go:141] libmachine: (force-systemd-env-893512) Calling .GetConfigRaw
	I1216 20:46:33.815880   54373 main.go:141] libmachine: Creating machine...
	I1216 20:46:33.815893   54373 main.go:141] libmachine: (force-systemd-env-893512) Calling .Create
	I1216 20:46:33.816026   54373 main.go:141] libmachine: (force-systemd-env-893512) Creating KVM machine...
	I1216 20:46:33.817249   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | found existing default KVM network
	I1216 20:46:33.818760   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | I1216 20:46:33.818575   54425 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f5f0}
	I1216 20:46:33.818787   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | created network xml: 
	I1216 20:46:33.818798   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | <network>
	I1216 20:46:33.818807   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |   <name>mk-force-systemd-env-893512</name>
	I1216 20:46:33.818826   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |   <dns enable='no'/>
	I1216 20:46:33.818837   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |   
	I1216 20:46:33.818851   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1216 20:46:33.818861   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |     <dhcp>
	I1216 20:46:33.818875   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1216 20:46:33.818883   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |     </dhcp>
	I1216 20:46:33.818918   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |   </ip>
	I1216 20:46:33.818932   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |   
	I1216 20:46:33.818944   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | </network>
	I1216 20:46:33.818954   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | 
	I1216 20:46:33.824388   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | trying to create private KVM network mk-force-systemd-env-893512 192.168.39.0/24...
	I1216 20:46:33.898632   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | private KVM network mk-force-systemd-env-893512 192.168.39.0/24 created
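The kvm2 driver defines and starts the private network above through the libvirt API. As a rough command-line equivalent of that step (an illustration only, using the network name and address range from this run, not the driver's implementation), the generated XML can be fed to virsh:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// networkXML has the same shape as the XML printed in the log above;
	// the name and DHCP range are the values chosen for this run.
	const networkXML = `<network>
	  <name>mk-force-systemd-env-893512</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		f, err := os.CreateTemp("", "net-*.xml")
		if err != nil {
			panic(err)
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(networkXML); err != nil {
			panic(err)
		}
		f.Close()

		// Define the network from the XML file, then start it.
		for _, args := range [][]string{
			{"net-define", f.Name()},
			{"net-start", "mk-force-systemd-env-893512"},
		} {
			cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
		}
	}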
	I1216 20:46:33.898669   54373 main.go:141] libmachine: (force-systemd-env-893512) Setting up store path in /home/jenkins/minikube-integration/20091-7083/.minikube/machines/force-systemd-env-893512 ...
	I1216 20:46:33.898683   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | I1216 20:46:33.898612   54425 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:46:33.898697   54373 main.go:141] libmachine: (force-systemd-env-893512) Building disk image from file:///home/jenkins/minikube-integration/20091-7083/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso
	I1216 20:46:33.898847   54373 main.go:141] libmachine: (force-systemd-env-893512) Downloading /home/jenkins/minikube-integration/20091-7083/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20091-7083/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1216 20:46:34.144443   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | I1216 20:46:34.144273   54425 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/force-systemd-env-893512/id_rsa...
	I1216 20:46:34.238180   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | I1216 20:46:34.238038   54425 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/force-systemd-env-893512/force-systemd-env-893512.rawdisk...
	I1216 20:46:34.238214   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Writing magic tar header
	I1216 20:46:34.238232   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Writing SSH key tar header
	I1216 20:46:34.238245   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | I1216 20:46:34.238164   54425 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/20091-7083/.minikube/machines/force-systemd-env-893512 ...
	I1216 20:46:34.238260   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/force-systemd-env-893512
	I1216 20:46:34.238327   54373 main.go:141] libmachine: (force-systemd-env-893512) Setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube/machines/force-systemd-env-893512 (perms=drwx------)
	I1216 20:46:34.238356   54373 main.go:141] libmachine: (force-systemd-env-893512) Setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube/machines (perms=drwxr-xr-x)
	I1216 20:46:34.238372   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube/machines
	I1216 20:46:34.238388   54373 main.go:141] libmachine: (force-systemd-env-893512) Setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube (perms=drwxr-xr-x)
	I1216 20:46:34.238406   54373 main.go:141] libmachine: (force-systemd-env-893512) Setting executable bit set on /home/jenkins/minikube-integration/20091-7083 (perms=drwxrwxr-x)
	I1216 20:46:34.238421   54373 main.go:141] libmachine: (force-systemd-env-893512) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1216 20:46:34.238434   54373 main.go:141] libmachine: (force-systemd-env-893512) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1216 20:46:34.238450   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:46:34.238477   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20091-7083
	I1216 20:46:34.238489   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1216 20:46:34.238503   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Checking permissions on dir: /home/jenkins
	I1216 20:46:34.238515   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Checking permissions on dir: /home
	I1216 20:46:34.238528   54373 main.go:141] libmachine: (force-systemd-env-893512) Creating domain...
	I1216 20:46:34.238541   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Skipping /home - not owner
	I1216 20:46:34.239667   54373 main.go:141] libmachine: (force-systemd-env-893512) define libvirt domain using xml: 
	I1216 20:46:34.239693   54373 main.go:141] libmachine: (force-systemd-env-893512) <domain type='kvm'>
	I1216 20:46:34.239704   54373 main.go:141] libmachine: (force-systemd-env-893512)   <name>force-systemd-env-893512</name>
	I1216 20:46:34.239721   54373 main.go:141] libmachine: (force-systemd-env-893512)   <memory unit='MiB'>2048</memory>
	I1216 20:46:34.239734   54373 main.go:141] libmachine: (force-systemd-env-893512)   <vcpu>2</vcpu>
	I1216 20:46:34.239745   54373 main.go:141] libmachine: (force-systemd-env-893512)   <features>
	I1216 20:46:34.239756   54373 main.go:141] libmachine: (force-systemd-env-893512)     <acpi/>
	I1216 20:46:34.239765   54373 main.go:141] libmachine: (force-systemd-env-893512)     <apic/>
	I1216 20:46:34.239773   54373 main.go:141] libmachine: (force-systemd-env-893512)     <pae/>
	I1216 20:46:34.239779   54373 main.go:141] libmachine: (force-systemd-env-893512)     
	I1216 20:46:34.239789   54373 main.go:141] libmachine: (force-systemd-env-893512)   </features>
	I1216 20:46:34.239803   54373 main.go:141] libmachine: (force-systemd-env-893512)   <cpu mode='host-passthrough'>
	I1216 20:46:34.239814   54373 main.go:141] libmachine: (force-systemd-env-893512)   
	I1216 20:46:34.239825   54373 main.go:141] libmachine: (force-systemd-env-893512)   </cpu>
	I1216 20:46:34.239832   54373 main.go:141] libmachine: (force-systemd-env-893512)   <os>
	I1216 20:46:34.239843   54373 main.go:141] libmachine: (force-systemd-env-893512)     <type>hvm</type>
	I1216 20:46:34.239858   54373 main.go:141] libmachine: (force-systemd-env-893512)     <boot dev='cdrom'/>
	I1216 20:46:34.239867   54373 main.go:141] libmachine: (force-systemd-env-893512)     <boot dev='hd'/>
	I1216 20:46:34.239876   54373 main.go:141] libmachine: (force-systemd-env-893512)     <bootmenu enable='no'/>
	I1216 20:46:34.239888   54373 main.go:141] libmachine: (force-systemd-env-893512)   </os>
	I1216 20:46:34.239899   54373 main.go:141] libmachine: (force-systemd-env-893512)   <devices>
	I1216 20:46:34.239912   54373 main.go:141] libmachine: (force-systemd-env-893512)     <disk type='file' device='cdrom'>
	I1216 20:46:34.239927   54373 main.go:141] libmachine: (force-systemd-env-893512)       <source file='/home/jenkins/minikube-integration/20091-7083/.minikube/machines/force-systemd-env-893512/boot2docker.iso'/>
	I1216 20:46:34.239938   54373 main.go:141] libmachine: (force-systemd-env-893512)       <target dev='hdc' bus='scsi'/>
	I1216 20:46:34.239966   54373 main.go:141] libmachine: (force-systemd-env-893512)       <readonly/>
	I1216 20:46:34.239985   54373 main.go:141] libmachine: (force-systemd-env-893512)     </disk>
	I1216 20:46:34.239999   54373 main.go:141] libmachine: (force-systemd-env-893512)     <disk type='file' device='disk'>
	I1216 20:46:34.240017   54373 main.go:141] libmachine: (force-systemd-env-893512)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1216 20:46:34.240034   54373 main.go:141] libmachine: (force-systemd-env-893512)       <source file='/home/jenkins/minikube-integration/20091-7083/.minikube/machines/force-systemd-env-893512/force-systemd-env-893512.rawdisk'/>
	I1216 20:46:34.240052   54373 main.go:141] libmachine: (force-systemd-env-893512)       <target dev='hda' bus='virtio'/>
	I1216 20:46:34.240063   54373 main.go:141] libmachine: (force-systemd-env-893512)     </disk>
	I1216 20:46:34.240073   54373 main.go:141] libmachine: (force-systemd-env-893512)     <interface type='network'>
	I1216 20:46:34.240103   54373 main.go:141] libmachine: (force-systemd-env-893512)       <source network='mk-force-systemd-env-893512'/>
	I1216 20:46:34.240140   54373 main.go:141] libmachine: (force-systemd-env-893512)       <model type='virtio'/>
	I1216 20:46:34.240154   54373 main.go:141] libmachine: (force-systemd-env-893512)     </interface>
	I1216 20:46:34.240166   54373 main.go:141] libmachine: (force-systemd-env-893512)     <interface type='network'>
	I1216 20:46:34.240179   54373 main.go:141] libmachine: (force-systemd-env-893512)       <source network='default'/>
	I1216 20:46:34.240189   54373 main.go:141] libmachine: (force-systemd-env-893512)       <model type='virtio'/>
	I1216 20:46:34.240201   54373 main.go:141] libmachine: (force-systemd-env-893512)     </interface>
	I1216 20:46:34.240212   54373 main.go:141] libmachine: (force-systemd-env-893512)     <serial type='pty'>
	I1216 20:46:34.240222   54373 main.go:141] libmachine: (force-systemd-env-893512)       <target port='0'/>
	I1216 20:46:34.240231   54373 main.go:141] libmachine: (force-systemd-env-893512)     </serial>
	I1216 20:46:34.240244   54373 main.go:141] libmachine: (force-systemd-env-893512)     <console type='pty'>
	I1216 20:46:34.240255   54373 main.go:141] libmachine: (force-systemd-env-893512)       <target type='serial' port='0'/>
	I1216 20:46:34.240268   54373 main.go:141] libmachine: (force-systemd-env-893512)     </console>
	I1216 20:46:34.240283   54373 main.go:141] libmachine: (force-systemd-env-893512)     <rng model='virtio'>
	I1216 20:46:34.240295   54373 main.go:141] libmachine: (force-systemd-env-893512)       <backend model='random'>/dev/random</backend>
	I1216 20:46:34.240305   54373 main.go:141] libmachine: (force-systemd-env-893512)     </rng>
	I1216 20:46:34.240316   54373 main.go:141] libmachine: (force-systemd-env-893512)     
	I1216 20:46:34.240324   54373 main.go:141] libmachine: (force-systemd-env-893512)     
	I1216 20:46:34.240329   54373 main.go:141] libmachine: (force-systemd-env-893512)   </devices>
	I1216 20:46:34.240344   54373 main.go:141] libmachine: (force-systemd-env-893512) </domain>
	I1216 20:46:34.240395   54373 main.go:141] libmachine: (force-systemd-env-893512) 
	I1216 20:46:34.244681   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | domain force-systemd-env-893512 has defined MAC address 52:54:00:18:c7:95 in network default
	I1216 20:46:34.245290   54373 main.go:141] libmachine: (force-systemd-env-893512) Ensuring networks are active...
	I1216 20:46:34.245306   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | domain force-systemd-env-893512 has defined MAC address 52:54:00:83:6b:76 in network mk-force-systemd-env-893512
	I1216 20:46:34.246029   54373 main.go:141] libmachine: (force-systemd-env-893512) Ensuring network default is active
	I1216 20:46:34.246385   54373 main.go:141] libmachine: (force-systemd-env-893512) Ensuring network mk-force-systemd-env-893512 is active
	I1216 20:46:34.247072   54373 main.go:141] libmachine: (force-systemd-env-893512) Getting domain xml...
	I1216 20:46:34.247752   54373 main.go:141] libmachine: (force-systemd-env-893512) Creating domain...
	I1216 20:46:35.496722   54373 main.go:141] libmachine: (force-systemd-env-893512) Waiting to get IP...
	I1216 20:46:35.497651   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | domain force-systemd-env-893512 has defined MAC address 52:54:00:83:6b:76 in network mk-force-systemd-env-893512
	I1216 20:46:35.498135   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | unable to find current IP address of domain force-systemd-env-893512 in network mk-force-systemd-env-893512
	I1216 20:46:35.498170   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | I1216 20:46:35.498084   54425 retry.go:31] will retry after 292.329334ms: waiting for machine to come up
	I1216 20:46:35.791776   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | domain force-systemd-env-893512 has defined MAC address 52:54:00:83:6b:76 in network mk-force-systemd-env-893512
	I1216 20:46:35.792490   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | unable to find current IP address of domain force-systemd-env-893512 in network mk-force-systemd-env-893512
	I1216 20:46:35.792525   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | I1216 20:46:35.792418   54425 retry.go:31] will retry after 292.710823ms: waiting for machine to come up
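	(For reference, the sequence logged above — defining the domain from the generated XML, starting it, and then waiting for a DHCP lease — can be reproduced outside the test with the libvirt Go bindings. This is a minimal sketch, not minikube's kvm2 driver code, and it assumes the libvirt.org/go/libvirt package, a local qemu:///system socket, and a hypothetical file holding the XML shown above.)

	// define_domain.go: hedged sketch of "Creating domain..." / "Waiting to get IP..." above.
	package main

	import (
		"log"
		"os"

		"libvirt.org/go/libvirt"
	)

	func main() {
		// Hypothetical file containing the <domain>...</domain> XML logged above.
		xml, err := os.ReadFile("force-systemd-env-893512.xml")
		if err != nil {
			log.Fatal(err)
		}
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Define the persistent domain from XML, then start it.
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()
		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
		// After this, libvirt's DHCP assigns the address the log retries for.
		log.Println("domain started; waiting for a DHCP lease on the mk-* network")
	}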
	I1216 20:46:34.730776   51593 addons.go:510] duration metric: took 4.157918ms for enable addons: enabled=[]
	I1216 20:46:34.730875   51593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:46:34.907809   51593 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:46:34.926314   51593 node_ready.go:35] waiting up to 6m0s for node "pause-022944" to be "Ready" ...
	I1216 20:46:34.929724   51593 node_ready.go:49] node "pause-022944" has status "Ready":"True"
	I1216 20:46:34.929743   51593 node_ready.go:38] duration metric: took 3.391669ms for node "pause-022944" to be "Ready" ...
	I1216 20:46:34.929752   51593 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 20:46:35.093912   51593 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-b94xm" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:35.490867   51593 pod_ready.go:93] pod "coredns-668d6bf9bc-b94xm" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:35.490889   51593 pod_ready.go:82] duration metric: took 396.948764ms for pod "coredns-668d6bf9bc-b94xm" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:35.490899   51593 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:35.891468   51593 pod_ready.go:93] pod "etcd-pause-022944" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:35.891495   51593 pod_ready.go:82] duration metric: took 400.589423ms for pod "etcd-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:35.891505   51593 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:36.291023   51593 pod_ready.go:93] pod "kube-apiserver-pause-022944" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:36.291055   51593 pod_ready.go:82] duration metric: took 399.542627ms for pod "kube-apiserver-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:36.291070   51593 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:36.691971   51593 pod_ready.go:93] pod "kube-controller-manager-pause-022944" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:36.692004   51593 pod_ready.go:82] duration metric: took 400.924036ms for pod "kube-controller-manager-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:36.692019   51593 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lr8m7" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:37.091274   51593 pod_ready.go:93] pod "kube-proxy-lr8m7" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:37.091309   51593 pod_ready.go:82] duration metric: took 399.280655ms for pod "kube-proxy-lr8m7" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:37.091326   51593 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:33.810953   54398 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:46:33.811007   54398 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:46:33.811018   54398 cache.go:56] Caching tarball of preloaded images
	I1216 20:46:33.811093   54398 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:46:33.811106   54398 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1216 20:46:33.811182   54398 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/force-systemd-flag-406516/config.json ...
	I1216 20:46:33.811204   54398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/force-systemd-flag-406516/config.json: {Name:mkeae737e8bdb60344cf74453abb58414d34d1af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:46:33.811398   54398 start.go:360] acquireMachinesLock for force-systemd-flag-406516: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:46:37.491652   51593 pod_ready.go:93] pod "kube-scheduler-pause-022944" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:37.491676   51593 pod_ready.go:82] duration metric: took 400.341579ms for pod "kube-scheduler-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:37.491684   51593 pod_ready.go:39] duration metric: took 2.561924189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 20:46:37.491702   51593 api_server.go:52] waiting for apiserver process to appear ...
	I1216 20:46:37.491769   51593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:46:37.508224   51593 api_server.go:72] duration metric: took 2.781679737s to wait for apiserver process to appear ...
	I1216 20:46:37.508272   51593 api_server.go:88] waiting for apiserver healthz status ...
	I1216 20:46:37.508298   51593 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8443/healthz ...
	I1216 20:46:37.513421   51593 api_server.go:279] https://192.168.72.189:8443/healthz returned 200:
	ok
	I1216 20:46:37.514668   51593 api_server.go:141] control plane version: v1.32.0
	I1216 20:46:37.514696   51593 api_server.go:131] duration metric: took 6.416459ms to wait for apiserver health ...
	I1216 20:46:37.514707   51593 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 20:46:37.694204   51593 system_pods.go:59] 6 kube-system pods found
	I1216 20:46:37.694243   51593 system_pods.go:61] "coredns-668d6bf9bc-b94xm" [a8987996-bf4a-40e2-8d88-903aa9218b2e] Running
	I1216 20:46:37.694251   51593 system_pods.go:61] "etcd-pause-022944" [dc603cea-4e84-4391-b1f1-4517943407db] Running
	I1216 20:46:37.694256   51593 system_pods.go:61] "kube-apiserver-pause-022944" [be01bace-6a51-448e-9cec-c0a4ecfb62ff] Running
	I1216 20:46:37.694267   51593 system_pods.go:61] "kube-controller-manager-pause-022944" [dd2ed828-82fb-48f9-827a-3447b71f8182] Running
	I1216 20:46:37.694279   51593 system_pods.go:61] "kube-proxy-lr8m7" [669f5a14-2fec-4984-87d0-49e760d25372] Running
	I1216 20:46:37.694285   51593 system_pods.go:61] "kube-scheduler-pause-022944" [5958c4b7-5fb4-4afa-b1f3-a2ed1ab5ed0b] Running
	I1216 20:46:37.694293   51593 system_pods.go:74] duration metric: took 179.5778ms to wait for pod list to return data ...
	I1216 20:46:37.694304   51593 default_sa.go:34] waiting for default service account to be created ...
	I1216 20:46:37.890964   51593 default_sa.go:45] found service account: "default"
	I1216 20:46:37.890993   51593 default_sa.go:55] duration metric: took 196.684137ms for default service account to be created ...
	I1216 20:46:37.891005   51593 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 20:46:38.093900   51593 system_pods.go:86] 6 kube-system pods found
	I1216 20:46:38.093937   51593 system_pods.go:89] "coredns-668d6bf9bc-b94xm" [a8987996-bf4a-40e2-8d88-903aa9218b2e] Running
	I1216 20:46:38.093946   51593 system_pods.go:89] "etcd-pause-022944" [dc603cea-4e84-4391-b1f1-4517943407db] Running
	I1216 20:46:38.093953   51593 system_pods.go:89] "kube-apiserver-pause-022944" [be01bace-6a51-448e-9cec-c0a4ecfb62ff] Running
	I1216 20:46:38.093959   51593 system_pods.go:89] "kube-controller-manager-pause-022944" [dd2ed828-82fb-48f9-827a-3447b71f8182] Running
	I1216 20:46:38.093965   51593 system_pods.go:89] "kube-proxy-lr8m7" [669f5a14-2fec-4984-87d0-49e760d25372] Running
	I1216 20:46:38.093978   51593 system_pods.go:89] "kube-scheduler-pause-022944" [5958c4b7-5fb4-4afa-b1f3-a2ed1ab5ed0b] Running
	I1216 20:46:38.093989   51593 system_pods.go:126] duration metric: took 202.97716ms to wait for k8s-apps to be running ...
	I1216 20:46:38.093999   51593 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 20:46:38.094063   51593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 20:46:38.110340   51593 system_svc.go:56] duration metric: took 16.33053ms WaitForService to wait for kubelet
	I1216 20:46:38.110374   51593 kubeadm.go:582] duration metric: took 3.383837806s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 20:46:38.110392   51593 node_conditions.go:102] verifying NodePressure condition ...
	I1216 20:46:38.292883   51593 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 20:46:38.292916   51593 node_conditions.go:123] node cpu capacity is 2
	I1216 20:46:38.292929   51593 node_conditions.go:105] duration metric: took 182.531603ms to run NodePressure ...
	I1216 20:46:38.292946   51593 start.go:241] waiting for startup goroutines ...
	I1216 20:46:38.292956   51593 start.go:246] waiting for cluster config update ...
	I1216 20:46:38.292968   51593 start.go:255] writing updated cluster config ...
	I1216 20:46:38.293692   51593 ssh_runner.go:195] Run: rm -f paused
	I1216 20:46:38.357004   51593 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 20:46:38.359157   51593 out.go:177] * Done! kubectl is now configured to use "pause-022944" cluster and "default" namespace by default
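	(The healthz probe logged above — "Checking apiserver healthz at https://192.168.72.189:8443/healthz ... returned 200: ok" — amounts to a plain HTTPS GET. A minimal sketch follows, using the endpoint from the log and skipping TLS verification as an illustrative shortcut; the real client authenticates with the cluster CA.)

	// healthz_probe.go: hedged sketch of the apiserver health check seen above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.72.189:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// Expect "200: ok" from a healthy control plane, as in the log.
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}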
	I1216 20:46:35.202269   49163 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:46:35.202487   49163 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	
	==> CRI-O <==
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.016561771Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734381999016537347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125693,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f213416-6a0e-42ff-9e48-200c73ae0cc3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.017305930Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a411a47a-0f74-436e-8269-24eb598a7049 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.017393298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a411a47a-0f74-436e-8269-24eb598a7049 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.017683497Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6094fa8f92769ca5adcf52a34975a0d9d8ea80958b4e324892b3d9bcd855da1a,PodSandboxId:5cc8aa32af5701f7149e556f970de9d66e7c1095594038936146e672bd6d952a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734381975005138689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a195272ef4d8827f64b18c63d184b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552d84b32e77a1b0b6d23519d99b5de113ece3472570ed304f8f9bb47000fef1,PodSandboxId:b4126a0fd4256b8cb2ac481830ba81ac2c9db122fe0cd5cee5b7307051297b7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734381974927294888,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda30d79cf96fd10e8a93666877b7f6f,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5086843e560485814f16eebdcc52c2b92b9d690f0f13a6729b91b43b1b54f608,PodSandboxId:e8d8b9f64f17b17cb413dab1b4cce7abc48906dc27fb52eeb6095ad197c13e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734381974950166064,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e7529e5f8260add0e74e083668b37f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d116e41e987f4b9c01c03319a8c9affb25d2191ee997274388351c27eaf6c8e9,PodSandboxId:59b7e7c1d457800efa0b6f4c37e6de20558f0e36fb3364268f94d7e9c5b1edf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734381974898020296,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751137e41a25750ea13a412704d360ef,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f07a18b183616dcc709257e2aba5cd6c282be5dd7813297819de30d3a1f82c1,PodSandboxId:6fb2babf6d89280b705753df1c09fb797d6ecc08c23bca347aadf3ab36b2975d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1734381967028596182,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lr8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669f5a14-2fec-4984-87d0-49e760d25372,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c9dd0f1fe4bdecb66009d1a58456510526837d73be0121e6ef269692ba6951b,PodSandboxId:fc6add71e5bcb034ec65c1fb72fdb66bc6375bf947fc692aebaa3a868102d543,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734381968121480304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b94xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8987996-bf4a-40e2-8d88-903aa9218b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f92e6114c6560a93d90ee5508a764de4ba7821079c1f9681d049a5138837ea7,PodSandboxId:b4126a0fd4256b8cb2ac481830ba81ac2c9db122fe0cd5cee5b7307051297b7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734381967113369768,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda30d79cf96fd10e8a93666877b7f6f,},Annotations:map[string]string{io.kubernetes.contain
er.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9ac0f1ddcb695e316df090a4f64ea07294b5a9c21536e57422030b7ae06a6c,PodSandboxId:e8d8b9f64f17b17cb413dab1b4cce7abc48906dc27fb52eeb6095ad197c13e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_EXITED,CreatedAt:1734381967067891596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e7529e5f8260add0e74e083668b37f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c482cafbd8f9c2cfef74a63eb751850fb18dd35c0440d01c46655444ca1521cb,PodSandboxId:59b7e7c1d457800efa0b6f4c37e6de20558f0e36fb3364268f94d7e9c5b1edf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1734381967039484686,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751137e41a25750ea13a412704d360ef,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee546aaecd2dd7603a85b67d467058f021900a40a5fa269124e39a6391103c98,PodSandboxId:5cc8aa32af5701f7149e556f970de9d66e7c1095594038936146e672bd6d952a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_EXITED,CreatedAt:1734381966949782400,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a195272ef4d8827f64b18c63d184b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30617acf2d2801c7042c99e0465810bb69557a04c41a3c32ea43501e24cb939c,PodSandboxId:bde86b5873f4afbd6c182d4321830ba2f2498ef349dd6299fb7e3c9f96bc4016,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1734381911663434646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b94xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8987996-bf4a-40e2-8d88-903aa9218b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbb5fcf4e0f39c2e089d4ea0db7705b4166dd0ea1dd790a9a8ffefd961b6d07,PodSandboxId:7f04e2a758fac3d9f6bb076f860b47025127cc601c2311c41576cde5c2ba24ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_EXITED,CreatedAt:1734381910491431494,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lr8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 669f5a14-2fec-4984-87d0-49e760d25372,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a411a47a-0f74-436e-8269-24eb598a7049 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.065006584Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=580b7847-ae1a-4384-a93f-294c711ed587 name=/runtime.v1.RuntimeService/Version
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.065345691Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=580b7847-ae1a-4384-a93f-294c711ed587 name=/runtime.v1.RuntimeService/Version
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.066941413Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f4480c7-514d-4c7e-b3ad-5ecae875ead9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.067575062Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734381999067543252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125693,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f4480c7-514d-4c7e-b3ad-5ecae875ead9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.068383227Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dec4e3d9-bab1-4c49-b581-fee1f289b7f5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.068481221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dec4e3d9-bab1-4c49-b581-fee1f289b7f5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.068886797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6094fa8f92769ca5adcf52a34975a0d9d8ea80958b4e324892b3d9bcd855da1a,PodSandboxId:5cc8aa32af5701f7149e556f970de9d66e7c1095594038936146e672bd6d952a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734381975005138689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a195272ef4d8827f64b18c63d184b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552d84b32e77a1b0b6d23519d99b5de113ece3472570ed304f8f9bb47000fef1,PodSandboxId:b4126a0fd4256b8cb2ac481830ba81ac2c9db122fe0cd5cee5b7307051297b7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734381974927294888,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda30d79cf96fd10e8a93666877b7f6f,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5086843e560485814f16eebdcc52c2b92b9d690f0f13a6729b91b43b1b54f608,PodSandboxId:e8d8b9f64f17b17cb413dab1b4cce7abc48906dc27fb52eeb6095ad197c13e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734381974950166064,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e7529e5f8260add0e74e083668b37f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d116e41e987f4b9c01c03319a8c9affb25d2191ee997274388351c27eaf6c8e9,PodSandboxId:59b7e7c1d457800efa0b6f4c37e6de20558f0e36fb3364268f94d7e9c5b1edf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734381974898020296,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751137e41a25750ea13a412704d360ef,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f07a18b183616dcc709257e2aba5cd6c282be5dd7813297819de30d3a1f82c1,PodSandboxId:6fb2babf6d89280b705753df1c09fb797d6ecc08c23bca347aadf3ab36b2975d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1734381967028596182,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lr8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669f5a14-2fec-4984-87d0-49e760d25372,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c9dd0f1fe4bdecb66009d1a58456510526837d73be0121e6ef269692ba6951b,PodSandboxId:fc6add71e5bcb034ec65c1fb72fdb66bc6375bf947fc692aebaa3a868102d543,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734381968121480304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b94xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8987996-bf4a-40e2-8d88-903aa9218b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f92e6114c6560a93d90ee5508a764de4ba7821079c1f9681d049a5138837ea7,PodSandboxId:b4126a0fd4256b8cb2ac481830ba81ac2c9db122fe0cd5cee5b7307051297b7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734381967113369768,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda30d79cf96fd10e8a93666877b7f6f,},Annotations:map[string]string{io.kubernetes.contain
er.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9ac0f1ddcb695e316df090a4f64ea07294b5a9c21536e57422030b7ae06a6c,PodSandboxId:e8d8b9f64f17b17cb413dab1b4cce7abc48906dc27fb52eeb6095ad197c13e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_EXITED,CreatedAt:1734381967067891596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e7529e5f8260add0e74e083668b37f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c482cafbd8f9c2cfef74a63eb751850fb18dd35c0440d01c46655444ca1521cb,PodSandboxId:59b7e7c1d457800efa0b6f4c37e6de20558f0e36fb3364268f94d7e9c5b1edf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1734381967039484686,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751137e41a25750ea13a412704d360ef,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee546aaecd2dd7603a85b67d467058f021900a40a5fa269124e39a6391103c98,PodSandboxId:5cc8aa32af5701f7149e556f970de9d66e7c1095594038936146e672bd6d952a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_EXITED,CreatedAt:1734381966949782400,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a195272ef4d8827f64b18c63d184b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30617acf2d2801c7042c99e0465810bb69557a04c41a3c32ea43501e24cb939c,PodSandboxId:bde86b5873f4afbd6c182d4321830ba2f2498ef349dd6299fb7e3c9f96bc4016,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1734381911663434646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b94xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8987996-bf4a-40e2-8d88-903aa9218b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbb5fcf4e0f39c2e089d4ea0db7705b4166dd0ea1dd790a9a8ffefd961b6d07,PodSandboxId:7f04e2a758fac3d9f6bb076f860b47025127cc601c2311c41576cde5c2ba24ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_EXITED,CreatedAt:1734381910491431494,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lr8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 669f5a14-2fec-4984-87d0-49e760d25372,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dec4e3d9-bab1-4c49-b581-fee1f289b7f5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.118566645Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47cc7a5f-218b-4c45-a7c0-e208a1267f90 name=/runtime.v1.RuntimeService/Version
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.118698116Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47cc7a5f-218b-4c45-a7c0-e208a1267f90 name=/runtime.v1.RuntimeService/Version
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.120373709Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50f9ec59-05b4-431f-83ff-9a0bb9679ba2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.120913352Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734381999120881304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125693,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50f9ec59-05b4-431f-83ff-9a0bb9679ba2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.121621573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c9064c6-8319-417a-a517-9178d1c3bc8d name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.121775957Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c9064c6-8319-417a-a517-9178d1c3bc8d name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.122222745Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6094fa8f92769ca5adcf52a34975a0d9d8ea80958b4e324892b3d9bcd855da1a,PodSandboxId:5cc8aa32af5701f7149e556f970de9d66e7c1095594038936146e672bd6d952a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734381975005138689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a195272ef4d8827f64b18c63d184b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552d84b32e77a1b0b6d23519d99b5de113ece3472570ed304f8f9bb47000fef1,PodSandboxId:b4126a0fd4256b8cb2ac481830ba81ac2c9db122fe0cd5cee5b7307051297b7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734381974927294888,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda30d79cf96fd10e8a93666877b7f6f,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5086843e560485814f16eebdcc52c2b92b9d690f0f13a6729b91b43b1b54f608,PodSandboxId:e8d8b9f64f17b17cb413dab1b4cce7abc48906dc27fb52eeb6095ad197c13e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734381974950166064,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e7529e5f8260add0e74e083668b37f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d116e41e987f4b9c01c03319a8c9affb25d2191ee997274388351c27eaf6c8e9,PodSandboxId:59b7e7c1d457800efa0b6f4c37e6de20558f0e36fb3364268f94d7e9c5b1edf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734381974898020296,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751137e41a25750ea13a412704d360ef,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f07a18b183616dcc709257e2aba5cd6c282be5dd7813297819de30d3a1f82c1,PodSandboxId:6fb2babf6d89280b705753df1c09fb797d6ecc08c23bca347aadf3ab36b2975d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1734381967028596182,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lr8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669f5a14-2fec-4984-87d0-49e760d25372,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c9dd0f1fe4bdecb66009d1a58456510526837d73be0121e6ef269692ba6951b,PodSandboxId:fc6add71e5bcb034ec65c1fb72fdb66bc6375bf947fc692aebaa3a868102d543,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734381968121480304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b94xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8987996-bf4a-40e2-8d88-903aa9218b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f92e6114c6560a93d90ee5508a764de4ba7821079c1f9681d049a5138837ea7,PodSandboxId:b4126a0fd4256b8cb2ac481830ba81ac2c9db122fe0cd5cee5b7307051297b7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734381967113369768,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda30d79cf96fd10e8a93666877b7f6f,},Annotations:map[string]string{io.kubernetes.contain
er.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9ac0f1ddcb695e316df090a4f64ea07294b5a9c21536e57422030b7ae06a6c,PodSandboxId:e8d8b9f64f17b17cb413dab1b4cce7abc48906dc27fb52eeb6095ad197c13e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_EXITED,CreatedAt:1734381967067891596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e7529e5f8260add0e74e083668b37f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c482cafbd8f9c2cfef74a63eb751850fb18dd35c0440d01c46655444ca1521cb,PodSandboxId:59b7e7c1d457800efa0b6f4c37e6de20558f0e36fb3364268f94d7e9c5b1edf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1734381967039484686,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751137e41a25750ea13a412704d360ef,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee546aaecd2dd7603a85b67d467058f021900a40a5fa269124e39a6391103c98,PodSandboxId:5cc8aa32af5701f7149e556f970de9d66e7c1095594038936146e672bd6d952a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_EXITED,CreatedAt:1734381966949782400,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a195272ef4d8827f64b18c63d184b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30617acf2d2801c7042c99e0465810bb69557a04c41a3c32ea43501e24cb939c,PodSandboxId:bde86b5873f4afbd6c182d4321830ba2f2498ef349dd6299fb7e3c9f96bc4016,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1734381911663434646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b94xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8987996-bf4a-40e2-8d88-903aa9218b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbb5fcf4e0f39c2e089d4ea0db7705b4166dd0ea1dd790a9a8ffefd961b6d07,PodSandboxId:7f04e2a758fac3d9f6bb076f860b47025127cc601c2311c41576cde5c2ba24ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_EXITED,CreatedAt:1734381910491431494,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lr8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 669f5a14-2fec-4984-87d0-49e760d25372,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c9064c6-8319-417a-a517-9178d1c3bc8d name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.169141873Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fac41a1e-62c1-4a4b-ab86-0c9b9d0299d0 name=/runtime.v1.RuntimeService/Version
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.169271244Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fac41a1e-62c1-4a4b-ab86-0c9b9d0299d0 name=/runtime.v1.RuntimeService/Version
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.170760048Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1cd07b7b-593f-4968-a295-d4c76968b32a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.171377905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734381999171346322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125693,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1cd07b7b-593f-4968-a295-d4c76968b32a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.171973060Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4435d6d-ebf7-4533-920a-5ba212417732 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.172111183Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4435d6d-ebf7-4533-920a-5ba212417732 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:39 pause-022944 crio[2402]: time="2024-12-16 20:46:39.172522060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6094fa8f92769ca5adcf52a34975a0d9d8ea80958b4e324892b3d9bcd855da1a,PodSandboxId:5cc8aa32af5701f7149e556f970de9d66e7c1095594038936146e672bd6d952a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734381975005138689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a195272ef4d8827f64b18c63d184b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552d84b32e77a1b0b6d23519d99b5de113ece3472570ed304f8f9bb47000fef1,PodSandboxId:b4126a0fd4256b8cb2ac481830ba81ac2c9db122fe0cd5cee5b7307051297b7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734381974927294888,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda30d79cf96fd10e8a93666877b7f6f,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5086843e560485814f16eebdcc52c2b92b9d690f0f13a6729b91b43b1b54f608,PodSandboxId:e8d8b9f64f17b17cb413dab1b4cce7abc48906dc27fb52eeb6095ad197c13e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734381974950166064,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e7529e5f8260add0e74e083668b37f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d116e41e987f4b9c01c03319a8c9affb25d2191ee997274388351c27eaf6c8e9,PodSandboxId:59b7e7c1d457800efa0b6f4c37e6de20558f0e36fb3364268f94d7e9c5b1edf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734381974898020296,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751137e41a25750ea13a412704d360ef,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f07a18b183616dcc709257e2aba5cd6c282be5dd7813297819de30d3a1f82c1,PodSandboxId:6fb2babf6d89280b705753df1c09fb797d6ecc08c23bca347aadf3ab36b2975d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1734381967028596182,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lr8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669f5a14-2fec-4984-87d0-49e760d25372,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c9dd0f1fe4bdecb66009d1a58456510526837d73be0121e6ef269692ba6951b,PodSandboxId:fc6add71e5bcb034ec65c1fb72fdb66bc6375bf947fc692aebaa3a868102d543,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734381968121480304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b94xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8987996-bf4a-40e2-8d88-903aa9218b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f92e6114c6560a93d90ee5508a764de4ba7821079c1f9681d049a5138837ea7,PodSandboxId:b4126a0fd4256b8cb2ac481830ba81ac2c9db122fe0cd5cee5b7307051297b7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734381967113369768,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda30d79cf96fd10e8a93666877b7f6f,},Annotations:map[string]string{io.kubernetes.contain
er.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9ac0f1ddcb695e316df090a4f64ea07294b5a9c21536e57422030b7ae06a6c,PodSandboxId:e8d8b9f64f17b17cb413dab1b4cce7abc48906dc27fb52eeb6095ad197c13e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_EXITED,CreatedAt:1734381967067891596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e7529e5f8260add0e74e083668b37f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c482cafbd8f9c2cfef74a63eb751850fb18dd35c0440d01c46655444ca1521cb,PodSandboxId:59b7e7c1d457800efa0b6f4c37e6de20558f0e36fb3364268f94d7e9c5b1edf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1734381967039484686,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751137e41a25750ea13a412704d360ef,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee546aaecd2dd7603a85b67d467058f021900a40a5fa269124e39a6391103c98,PodSandboxId:5cc8aa32af5701f7149e556f970de9d66e7c1095594038936146e672bd6d952a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_EXITED,CreatedAt:1734381966949782400,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a195272ef4d8827f64b18c63d184b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30617acf2d2801c7042c99e0465810bb69557a04c41a3c32ea43501e24cb939c,PodSandboxId:bde86b5873f4afbd6c182d4321830ba2f2498ef349dd6299fb7e3c9f96bc4016,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1734381911663434646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b94xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8987996-bf4a-40e2-8d88-903aa9218b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbb5fcf4e0f39c2e089d4ea0db7705b4166dd0ea1dd790a9a8ffefd961b6d07,PodSandboxId:7f04e2a758fac3d9f6bb076f860b47025127cc601c2311c41576cde5c2ba24ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_EXITED,CreatedAt:1734381910491431494,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lr8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 669f5a14-2fec-4984-87d0-49e760d25372,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4435d6d-ebf7-4533-920a-5ba212417732 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6094fa8f92769       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   24 seconds ago       Running             kube-controller-manager   2                   5cc8aa32af570       kube-controller-manager-pause-022944
	5086843e56048       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   24 seconds ago       Running             kube-scheduler            2                   e8d8b9f64f17b       kube-scheduler-pause-022944
	552d84b32e77a       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   24 seconds ago       Running             kube-apiserver            2                   b4126a0fd4256       kube-apiserver-pause-022944
	d116e41e987f4       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   24 seconds ago       Running             etcd                      2                   59b7e7c1d4578       etcd-pause-022944
	8c9dd0f1fe4bd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   31 seconds ago       Running             coredns                   1                   fc6add71e5bcb       coredns-668d6bf9bc-b94xm
	3f92e6114c656       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   32 seconds ago       Exited              kube-apiserver            1                   b4126a0fd4256       kube-apiserver-pause-022944
	4d9ac0f1ddcb6       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   32 seconds ago       Exited              kube-scheduler            1                   e8d8b9f64f17b       kube-scheduler-pause-022944
	c482cafbd8f9c       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   32 seconds ago       Exited              etcd                      1                   59b7e7c1d4578       etcd-pause-022944
	7f07a18b18361       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   32 seconds ago       Running             kube-proxy                1                   6fb2babf6d892       kube-proxy-lr8m7
	ee546aaecd2dd       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   32 seconds ago       Exited              kube-controller-manager   1                   5cc8aa32af570       kube-controller-manager-pause-022944
	30617acf2d280       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   bde86b5873f4a       coredns-668d6bf9bc-b94xm
	5dbb5fcf4e0f3       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   About a minute ago   Exited              kube-proxy                0                   7f04e2a758fac       kube-proxy-lr8m7
	
	
	==> coredns [30617acf2d2801c7042c99e0465810bb69557a04c41a3c32ea43501e24cb939c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/kubernetes: Trace[487060525]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Dec-2024 20:45:12.034) (total time: 29578ms):
	Trace[487060525]: ---"Objects listed" error:<nil> 29578ms (20:45:41.613)
	Trace[487060525]: [29.578832566s] [29.578832566s] END
	[INFO] plugin/kubernetes: Trace[346612538]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Dec-2024 20:45:12.029) (total time: 29584ms):
	Trace[346612538]: ---"Objects listed" error:<nil> 29584ms (20:45:41.613)
	Trace[346612538]: [29.584722103s] [29.584722103s] END
	[INFO] plugin/kubernetes: Trace[31414365]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Dec-2024 20:45:12.034) (total time: 29579ms):
	Trace[31414365]: ---"Objects listed" error:<nil> 29579ms (20:45:41.614)
	Trace[31414365]: [29.579975093s] [29.579975093s] END
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	[INFO] Reloading complete
	[INFO] 127.0.0.1:53038 - 61622 "HINFO IN 8161388205809105035.1151591015590456726. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.046913983s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8c9dd0f1fe4bdecb66009d1a58456510526837d73be0121e6ef269692ba6951b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] 127.0.0.1:56759 - 75 "HINFO IN 7367595749151626410.4636715588362668321. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027954479s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               pause-022944
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-022944
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477
	                    minikube.k8s.io/name=pause-022944
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T20_45_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 20:45:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-022944
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 20:46:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 20:46:18 +0000   Mon, 16 Dec 2024 20:45:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 20:46:18 +0000   Mon, 16 Dec 2024 20:45:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 20:46:18 +0000   Mon, 16 Dec 2024 20:45:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 20:46:18 +0000   Mon, 16 Dec 2024 20:45:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.189
	  Hostname:    pause-022944
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 c03fe8ee6686421d97e89d57e2c72201
	  System UUID:                c03fe8ee-6686-421d-97e8-9d57e2c72201
	  Boot ID:                    0f74cc51-9a62-4fac-879f-d09f143e28d3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-b94xm                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     89s
	  kube-system                 etcd-pause-022944                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         94s
	  kube-system                 kube-apiserver-pause-022944             250m (12%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-pause-022944    200m (10%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-lr8m7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-pause-022944             100m (5%)     0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 88s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeHasSufficientPID     94s                kubelet          Node pause-022944 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  94s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  94s                kubelet          Node pause-022944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s                kubelet          Node pause-022944 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 94s                kubelet          Starting kubelet.
	  Normal  NodeReady                93s                kubelet          Node pause-022944 status is now: NodeReady
	  Normal  RegisteredNode           90s                node-controller  Node pause-022944 event: Registered Node pause-022944 in Controller
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-022944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-022944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-022944 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node pause-022944 event: Registered Node pause-022944 in Controller
	
	
	==> dmesg <==
	[  +0.058906] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.079445] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.219052] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.139162] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.296069] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +4.401628] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +0.073434] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.061570] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +1.234950] kauditd_printk_skb: 57 callbacks suppressed
	[Dec16 20:45] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	[  +0.088409] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.973344] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	[  +0.054865] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.817898] kauditd_printk_skb: 88 callbacks suppressed
	[ +37.133621] systemd-fstab-generator[2328]: Ignoring "noauto" option for root device
	[  +0.150066] systemd-fstab-generator[2340]: Ignoring "noauto" option for root device
	[  +0.186534] systemd-fstab-generator[2354]: Ignoring "noauto" option for root device
	[  +0.146567] systemd-fstab-generator[2366]: Ignoring "noauto" option for root device
	[  +0.283910] systemd-fstab-generator[2394]: Ignoring "noauto" option for root device
	[Dec16 20:46] systemd-fstab-generator[2522]: Ignoring "noauto" option for root device
	[  +0.099392] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.507514] kauditd_printk_skb: 85 callbacks suppressed
	[  +2.120046] systemd-fstab-generator[3308]: Ignoring "noauto" option for root device
	[  +5.946334] kauditd_printk_skb: 40 callbacks suppressed
	[ +14.853879] systemd-fstab-generator[3675]: Ignoring "noauto" option for root device
	
	
	==> etcd [c482cafbd8f9c2cfef74a63eb751850fb18dd35c0440d01c46655444ca1521cb] <==
	{"level":"info","ts":"2024-12-16T20:46:09.330474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-16T20:46:09.330545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc received MsgPreVoteResp from 387e2109401c13dc at term 2"}
	{"level":"info","ts":"2024-12-16T20:46:09.330591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc became candidate at term 3"}
	{"level":"info","ts":"2024-12-16T20:46:09.330630Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc received MsgVoteResp from 387e2109401c13dc at term 3"}
	{"level":"info","ts":"2024-12-16T20:46:09.330659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc became leader at term 3"}
	{"level":"info","ts":"2024-12-16T20:46:09.330685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 387e2109401c13dc elected leader 387e2109401c13dc at term 3"}
	{"level":"info","ts":"2024-12-16T20:46:09.335902Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"387e2109401c13dc","local-member-attributes":"{Name:pause-022944 ClientURLs:[https://192.168.72.189:2379]}","request-path":"/0/members/387e2109401c13dc/attributes","cluster-id":"69ad83e0a7175c67","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-16T20:46:09.339203Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T20:46:09.339820Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T20:46:09.346128Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T20:46:09.346718Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-16T20:46:09.340103Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-16T20:46:09.348219Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-16T20:46:09.348721Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T20:46:09.349737Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.189:2379"}
	{"level":"info","ts":"2024-12-16T20:46:12.337260Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-12-16T20:46:12.337331Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"pause-022944","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.189:2380"],"advertise-client-urls":["https://192.168.72.189:2379"]}
	{"level":"warn","ts":"2024-12-16T20:46:12.337411Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-16T20:46:12.337457Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-16T20:46:12.337532Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.189:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-16T20:46:12.337541Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.189:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-16T20:46:12.339165Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"387e2109401c13dc","current-leader-member-id":"387e2109401c13dc"}
	{"level":"info","ts":"2024-12-16T20:46:12.343142Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.72.189:2380"}
	{"level":"info","ts":"2024-12-16T20:46:12.343311Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.72.189:2380"}
	{"level":"info","ts":"2024-12-16T20:46:12.343324Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"pause-022944","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.189:2380"],"advertise-client-urls":["https://192.168.72.189:2379"]}
	
	
	==> etcd [d116e41e987f4b9c01c03319a8c9affb25d2191ee997274388351c27eaf6c8e9] <==
	{"level":"info","ts":"2024-12-16T20:46:15.321598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc switched to configuration voters=(4070727436803511260)"}
	{"level":"info","ts":"2024-12-16T20:46:15.321648Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"69ad83e0a7175c67","local-member-id":"387e2109401c13dc","added-peer-id":"387e2109401c13dc","added-peer-peer-urls":["https://192.168.72.189:2380"]}
	{"level":"info","ts":"2024-12-16T20:46:15.321728Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"69ad83e0a7175c67","local-member-id":"387e2109401c13dc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T20:46:15.321748Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T20:46:15.325649Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-16T20:46:15.326254Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"387e2109401c13dc","initial-advertise-peer-urls":["https://192.168.72.189:2380"],"listen-peer-urls":["https://192.168.72.189:2380"],"advertise-client-urls":["https://192.168.72.189:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.189:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-16T20:46:15.326545Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-16T20:46:15.325865Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.72.189:2380"}
	{"level":"info","ts":"2024-12-16T20:46:15.332123Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.72.189:2380"}
	{"level":"info","ts":"2024-12-16T20:46:16.482154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc is starting a new election at term 3"}
	{"level":"info","ts":"2024-12-16T20:46:16.482268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-12-16T20:46:16.482313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc received MsgPreVoteResp from 387e2109401c13dc at term 3"}
	{"level":"info","ts":"2024-12-16T20:46:16.482342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc became candidate at term 4"}
	{"level":"info","ts":"2024-12-16T20:46:16.482363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc received MsgVoteResp from 387e2109401c13dc at term 4"}
	{"level":"info","ts":"2024-12-16T20:46:16.482384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc became leader at term 4"}
	{"level":"info","ts":"2024-12-16T20:46:16.482402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 387e2109401c13dc elected leader 387e2109401c13dc at term 4"}
	{"level":"info","ts":"2024-12-16T20:46:16.488433Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"387e2109401c13dc","local-member-attributes":"{Name:pause-022944 ClientURLs:[https://192.168.72.189:2379]}","request-path":"/0/members/387e2109401c13dc/attributes","cluster-id":"69ad83e0a7175c67","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-16T20:46:16.488583Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T20:46:16.489035Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T20:46:16.489690Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T20:46:16.492833Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-16T20:46:16.493480Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T20:46:16.528479Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.189:2379"}
	{"level":"info","ts":"2024-12-16T20:46:16.495166Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-16T20:46:16.528931Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:46:39 up 2 min,  0 users,  load average: 1.28, 0.53, 0.20
	Linux pause-022944 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3f92e6114c6560a93d90ee5508a764de4ba7821079c1f9681d049a5138837ea7] <==
	I1216 20:46:11.094319       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1216 20:46:11.094352       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for crd-autoregister" logger="UnhandledError"
	I1216 20:46:11.095250       1 crd_finalizer.go:273] Shutting down CRDFinalizer
	I1216 20:46:11.095330       1 apiapproval_controller.go:193] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I1216 20:46:11.095353       1 nonstructuralschema_controller.go:199] Shutting down NonStructuralSchemaConditionController
	I1216 20:46:11.095374       1 establishing_controller.go:85] Shutting down EstablishingController
	I1216 20:46:11.095396       1 crdregistration_controller.go:119] Shutting down crd-autoregister controller
	I1216 20:46:11.095420       1 naming_controller.go:298] Shutting down NamingConditionController
	E1216 20:46:11.095450       1 controller.go:95] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E1216 20:46:11.095477       1 controller.go:148] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E1216 20:46:11.096170       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for APIServiceRegistrationController controller" logger="UnhandledError"
	I1216 20:46:11.096256       1 controller.go:84] Shutting down OpenAPI AggregationController
	E1216 20:46:11.096288       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for LocalAvailability controller" logger="UnhandledError"
	F1216 20:46:11.096310       1 hooks.go:204] PostStartHook "crd-informer-synced" failed: timed out waiting for the condition
	E1216 20:46:11.166314       1 gc_controller.go:84] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	I1216 20:46:11.169227       1 gc_controller.go:85] Shutting down apiserver lease garbage collector
	I1216 20:46:11.169316       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	E1216 20:46:11.169379       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for cluster_authentication_trust_controller" logger="UnhandledError"
	I1216 20:46:11.169422       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E1216 20:46:11.169492       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for RemoteAvailability controller" logger="UnhandledError"
	I1216 20:46:11.169541       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	E1216 20:46:11.169588       1 customresource_discovery_controller.go:295] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	I1216 20:46:11.169620       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	E1216 20:46:11.169652       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for configmaps" logger="UnhandledError"
	E1216 20:46:11.169683       1 controller.go:89] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	
	
	==> kube-apiserver [552d84b32e77a1b0b6d23519d99b5de113ece3472570ed304f8f9bb47000fef1] <==
	I1216 20:46:18.208565       1 shared_informer.go:320] Caches are synced for configmaps
	I1216 20:46:18.208642       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1216 20:46:18.211896       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1216 20:46:18.212609       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 20:46:18.219956       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1216 20:46:18.220374       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1216 20:46:18.246332       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1216 20:46:18.223465       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1216 20:46:18.240423       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1216 20:46:18.253969       1 policy_source.go:240] refreshing policies
	I1216 20:46:18.254118       1 aggregator.go:171] initial CRD sync complete...
	I1216 20:46:18.254167       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 20:46:18.254195       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 20:46:18.254216       1 cache.go:39] Caches are synced for autoregister controller
	I1216 20:46:18.246276       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 20:46:18.270097       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 20:46:18.415932       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1216 20:46:19.108233       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 20:46:19.991572       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1216 20:46:20.036958       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1216 20:46:20.070996       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 20:46:20.080402       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 20:46:21.619947       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 20:46:21.668429       1 controller.go:615] quota admission added evaluator for: endpoints
	I1216 20:46:22.086698       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6094fa8f92769ca5adcf52a34975a0d9d8ea80958b4e324892b3d9bcd855da1a] <==
	I1216 20:46:21.418169       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1216 20:46:21.418285       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1216 20:46:21.418338       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1216 20:46:21.421341       1 shared_informer.go:320] Caches are synced for node
	I1216 20:46:21.421394       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 20:46:21.421416       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 20:46:21.421420       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1216 20:46:21.421424       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1216 20:46:21.421491       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-022944"
	I1216 20:46:21.427401       1 shared_informer.go:320] Caches are synced for resource quota
	I1216 20:46:21.429926       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1216 20:46:21.441258       1 shared_informer.go:320] Caches are synced for resource quota
	I1216 20:46:21.452708       1 shared_informer.go:320] Caches are synced for garbage collector
	I1216 20:46:21.452791       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 20:46:21.452804       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 20:46:21.458595       1 shared_informer.go:320] Caches are synced for stateful set
	I1216 20:46:21.459913       1 shared_informer.go:320] Caches are synced for expand
	I1216 20:46:21.461962       1 shared_informer.go:320] Caches are synced for endpoint
	I1216 20:46:21.464020       1 shared_informer.go:320] Caches are synced for deployment
	I1216 20:46:21.464940       1 shared_informer.go:320] Caches are synced for persistent volume
	I1216 20:46:21.490943       1 shared_informer.go:320] Caches are synced for garbage collector
	I1216 20:46:22.094301       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="72.320591ms"
	I1216 20:46:22.094976       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="73.904µs"
	I1216 20:46:22.118145       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="22.602068ms"
	I1216 20:46:22.118636       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="65.152µs"
	
	
	==> kube-controller-manager [ee546aaecd2dd7603a85b67d467058f021900a40a5fa269124e39a6391103c98] <==
	I1216 20:46:09.093772       1 serving.go:386] Generated self-signed cert in-memory
	I1216 20:46:09.342531       1 controllermanager.go:185] "Starting" version="v1.32.0"
	I1216 20:46:09.342635       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 20:46:09.345043       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1216 20:46:09.347461       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1216 20:46:09.347540       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 20:46:09.347602       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [5dbb5fcf4e0f39c2e089d4ea0db7705b4166dd0ea1dd790a9a8ffefd961b6d07] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1216 20:45:11.186917       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1216 20:45:11.292844       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.189"]
	E1216 20:45:11.292941       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 20:45:11.446358       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1216 20:45:11.446458       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 20:45:11.446523       1 server_linux.go:170] "Using iptables Proxier"
	I1216 20:45:11.466193       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 20:45:11.466825       1 server.go:497] "Version info" version="v1.32.0"
	I1216 20:45:11.466838       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 20:45:11.471481       1 config.go:199] "Starting service config controller"
	I1216 20:45:11.471527       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 20:45:11.471554       1 config.go:105] "Starting endpoint slice config controller"
	I1216 20:45:11.471558       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 20:45:11.489708       1 config.go:329] "Starting node config controller"
	I1216 20:45:11.489740       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 20:45:11.571696       1 shared_informer.go:320] Caches are synced for service config
	I1216 20:45:11.571772       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 20:45:11.591843       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7f07a18b183616dcc709257e2aba5cd6c282be5dd7813297819de30d3a1f82c1] <==
	E1216 20:46:09.477869       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1216 20:46:12.184684       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-022944\": dial tcp 192.168.72.189:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.72.189:59804->192.168.72.189:8443: read: connection reset by peer"
	E1216 20:46:13.370704       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-022944\": dial tcp 192.168.72.189:8443: connect: connection refused"
	E1216 20:46:15.480575       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-022944\": dial tcp 192.168.72.189:8443: connect: connection refused"
	I1216 20:46:19.657722       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.189"]
	E1216 20:46:19.657965       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 20:46:19.698847       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1216 20:46:19.698959       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 20:46:19.699008       1 server_linux.go:170] "Using iptables Proxier"
	I1216 20:46:19.702023       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 20:46:19.702472       1 server.go:497] "Version info" version="v1.32.0"
	I1216 20:46:19.702520       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 20:46:19.703884       1 config.go:199] "Starting service config controller"
	I1216 20:46:19.703943       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 20:46:19.703978       1 config.go:105] "Starting endpoint slice config controller"
	I1216 20:46:19.703994       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 20:46:19.704591       1 config.go:329] "Starting node config controller"
	I1216 20:46:19.704638       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 20:46:19.804267       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 20:46:19.804296       1 shared_informer.go:320] Caches are synced for service config
	I1216 20:46:19.804870       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4d9ac0f1ddcb695e316df090a4f64ea07294b5a9c21536e57422030b7ae06a6c] <==
	I1216 20:46:09.019782       1 serving.go:386] Generated self-signed cert in-memory
	W1216 20:46:12.186293       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.72.189:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.72.189:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.72.189:59820->192.168.72.189:8443: read: connection reset by peer
	W1216 20:46:12.186364       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 20:46:12.186376       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 20:46:12.202410       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1216 20:46:12.202433       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1216 20:46:12.202457       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1216 20:46:12.207406       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E1216 20:46:12.207538       1 server.go:266] "waiting for handlers to sync" err="context canceled"
	E1216 20:46:12.207601       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [5086843e560485814f16eebdcc52c2b92b9d690f0f13a6729b91b43b1b54f608] <==
	I1216 20:46:16.051686       1 serving.go:386] Generated self-signed cert in-memory
	I1216 20:46:18.313249       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1216 20:46:18.313308       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 20:46:18.337577       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1216 20:46:18.337633       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1216 20:46:18.337711       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 20:46:18.337753       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1216 20:46:18.337774       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 20:46:18.337808       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1216 20:46:18.338746       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1216 20:46:18.338829       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 20:46:18.438565       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I1216 20:46:18.439042       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1216 20:46:18.439846       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Dec 16 20:46:17 pause-022944 kubelet[3315]: E1216 20:46:17.382220    3315 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-022944\" not found" node="pause-022944"
	Dec 16 20:46:17 pause-022944 kubelet[3315]: E1216 20:46:17.383158    3315 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-022944\" not found" node="pause-022944"
	Dec 16 20:46:17 pause-022944 kubelet[3315]: E1216 20:46:17.383907    3315 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-022944\" not found" node="pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.234379    3315 apiserver.go:52] "Watching apiserver"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.253234    3315 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.354840    3315 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.369219    3315 kubelet_node_status.go:125] "Node was previously registered" node="pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.369466    3315 kubelet_node_status.go:79] "Successfully registered node" node="pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.369566    3315 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.370650    3315 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.380537    3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/669f5a14-2fec-4984-87d0-49e760d25372-xtables-lock\") pod \"kube-proxy-lr8m7\" (UID: \"669f5a14-2fec-4984-87d0-49e760d25372\") " pod="kube-system/kube-proxy-lr8m7"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.380621    3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/669f5a14-2fec-4984-87d0-49e760d25372-lib-modules\") pod \"kube-proxy-lr8m7\" (UID: \"669f5a14-2fec-4984-87d0-49e760d25372\") " pod="kube-system/kube-proxy-lr8m7"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.384485    3315 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: E1216 20:46:18.415625    3315 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-022944\" already exists" pod="kube-system/kube-scheduler-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.415808    3315 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: E1216 20:46:18.423590    3315 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-022944\" already exists" pod="kube-system/kube-apiserver-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: E1216 20:46:18.435757    3315 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-022944\" already exists" pod="kube-system/etcd-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.435935    3315 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: E1216 20:46:18.452891    3315 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-022944\" already exists" pod="kube-system/kube-apiserver-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.453015    3315 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: E1216 20:46:18.471282    3315 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-022944\" already exists" pod="kube-system/kube-controller-manager-pause-022944"
	Dec 16 20:46:24 pause-022944 kubelet[3315]: E1216 20:46:24.444024    3315 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734381984443446391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125693,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 20:46:24 pause-022944 kubelet[3315]: E1216 20:46:24.444763    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734381984443446391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125693,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 20:46:34 pause-022944 kubelet[3315]: E1216 20:46:34.448018    3315 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734381994447307922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125693,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 20:46:34 pause-022944 kubelet[3315]: E1216 20:46:34.448408    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734381994447307922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125693,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
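The kube-proxy log in the dump above shows the nftables cleanup step failing for the ip6 family ("add table ip6 kube-proxy ... Operation not supported") before the proxier falls back to iptables in single-stack IPv4 mode, consistent with the "No iptables support for family" ipFamily="IPv6" line. Below is a minimal, hypothetical Go sketch (not part of the minikube test harness) that probes the same capability by trying to create and remove a throwaway ip6 table with the nft CLI; the table name probe-test and the assumption that nft is on PATH inside the guest are illustrative only.

package main

import (
	"fmt"
	"os/exec"
)

// probeIP6NFTables attempts the same operation kube-proxy logged above:
// creating an ip6 nftables table. On kernels without ip6 nftables support
// this fails with "Operation not supported".
func probeIP6NFTables() error {
	if out, err := exec.Command("nft", "add", "table", "ip6", "probe-test").CombinedOutput(); err != nil {
		return fmt.Errorf("ip6 nftables unavailable: %v (%s)", err, out)
	}
	// Best-effort cleanup of the throwaway probe table.
	_ = exec.Command("nft", "delete", "table", "ip6", "probe-test").Run()
	return nil
}

func main() {
	if err := probeIP6NFTables(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ip6 nftables supported")
}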
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-022944 -n pause-022944
helpers_test.go:261: (dbg) Run:  kubectl --context pause-022944 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
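The kubelet section of the same dump repeatedly logs "Eviction manager: failed to get HasDedicatedImageFs ... missing image stats" while printing the ImageFsInfoResponse it got back from CRI-O. The following is a short, hypothetical Go sketch (assumptions: the CRI-O socket at /var/run/crio/crio.sock and the k8s.io/cri-api v1 gRPC client; this is not part of the test suite) of issuing the same ImageFsInfo query directly, which can help confirm what the runtime actually reports for the overlay-images mountpoint seen above.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Socket path is an assumption; CRI-O commonly listens here.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// ImageFsInfo is the CRI call whose response appears verbatim in the
	// eviction-manager errors in the dump above.
	resp, err := runtimeapi.NewImageServiceClient(conn).ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, fs := range resp.GetImageFilesystems() {
		fmt.Printf("mountpoint=%s used_bytes=%d inodes_used=%d\n",
			fs.GetFsId().GetMountpoint(), fs.GetUsedBytes().GetValue(), fs.GetInodesUsed().GetValue())
	}
}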
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-022944 -n pause-022944
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-022944 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-022944 logs -n 25: (1.372381758s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-647112 sudo cat                            | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo cat                            | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo cat                            | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo docker                         | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo cat                            | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo cat                            | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo cat                            | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo cat                            | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo                                | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo find                           | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-647112 sudo crio                           | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-647112                                     | cilium-647112             | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC | 16 Dec 24 20:46 UTC |
	| ssh     | -p NoKubernetes-545724 sudo                          | NoKubernetes-545724       | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | systemctl is-active --quiet                          |                           |         |         |                     |                     |
	|         | service kubelet                                      |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-545724                               | NoKubernetes-545724       | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC | 16 Dec 24 20:46 UTC |
	| start   | -p force-systemd-env-893512                          | force-systemd-env-893512  | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-406516                         | force-systemd-flag-406516 | jenkins | v1.34.0 | 16 Dec 24 20:46 UTC |                     |
	|         | --memory=2048 --force-systemd                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 20:46:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 20:46:32.530534   54398 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:46:32.530650   54398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:46:32.530657   54398 out.go:358] Setting ErrFile to fd 2...
	I1216 20:46:32.530661   54398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:46:32.530913   54398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:46:32.531665   54398 out.go:352] Setting JSON to false
	I1216 20:46:32.532701   54398 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5338,"bootTime":1734376655,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 20:46:32.532834   54398 start.go:139] virtualization: kvm guest
	I1216 20:46:32.535210   54398 out.go:177] * [force-systemd-flag-406516] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 20:46:32.536763   54398 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 20:46:32.536776   54398 notify.go:220] Checking for updates...
	I1216 20:46:32.539480   54398 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 20:46:32.540914   54398 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:46:32.542477   54398 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:46:32.544007   54398 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 20:46:32.545628   54398 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 20:46:32.547435   54398 config.go:182] Loaded profile config "kubernetes-upgrade-560677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:46:32.547573   54398 config.go:182] Loaded profile config "pause-022944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:46:32.547685   54398 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 20:46:32.582301   54398 out.go:177] * Using the kvm2 driver based on user configuration
	I1216 20:46:32.504657   54373 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.34.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.34.0/docker-machine-driver-kvm2-amd64.sha256 -> /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:46:33.788889   54373 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 20:46:33.789180   54373 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 20:46:33.789217   54373 cni.go:84] Creating CNI manager for ""
	I1216 20:46:33.789255   54373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:46:33.789266   54373 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 20:46:33.789324   54373 start.go:340] cluster config:
	{Name:force-systemd-env-893512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:force-systemd-env-893512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:46:33.789454   54373 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:46:33.791821   54373 out.go:177] * Starting "force-systemd-env-893512" primary control-plane node in "force-systemd-env-893512" cluster
	I1216 20:46:33.793288   54373 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:46:33.793331   54373 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:46:33.793341   54373 cache.go:56] Caching tarball of preloaded images
	I1216 20:46:33.793435   54373 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:46:33.793447   54373 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1216 20:46:33.793542   54373 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/force-systemd-env-893512/config.json ...
	I1216 20:46:33.793559   54373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/force-systemd-env-893512/config.json: {Name:mka0d3de6c5ec33a2b9026b545de211f5b96bb45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:46:33.793717   54373 start.go:360] acquireMachinesLock for force-systemd-env-893512: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:46:33.793770   54373 start.go:364] duration metric: took 28.08µs to acquireMachinesLock for "force-systemd-env-893512"
	I1216 20:46:33.793798   54373 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-893512 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:force-systemd-env-893512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 20:46:33.793883   54373 start.go:125] createHost starting for "" (driver="kvm2")
	I1216 20:46:32.583637   54398 start.go:297] selected driver: kvm2
	I1216 20:46:32.583657   54398 start.go:901] validating driver "kvm2" against <nil>
	I1216 20:46:32.583670   54398 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 20:46:32.584429   54398 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:46:33.788985   54398 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 20:46:33.807111   54398 install.go:137] /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1216 20:46:33.807159   54398 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 20:46:33.807458   54398 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 20:46:33.807491   54398 cni.go:84] Creating CNI manager for ""
	I1216 20:46:33.807541   54398 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:46:33.807554   54398 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 20:46:33.807607   54398 start.go:340] cluster config:
	{Name:force-systemd-flag-406516 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:force-systemd-flag-406516 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:46:33.807740   54398 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:46:33.809737   54398 out.go:177] * Starting "force-systemd-flag-406516" primary control-plane node in "force-systemd-flag-406516" cluster
	I1216 20:46:33.700440   51593 pod_ready.go:103] pod "etcd-pause-022944" in "kube-system" namespace has status "Ready":"False"
	I1216 20:46:34.692633   51593 pod_ready.go:93] pod "etcd-pause-022944" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:34.692660   51593 pod_ready.go:82] duration metric: took 14.505609964s for pod "etcd-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.692671   51593 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.696827   51593 pod_ready.go:93] pod "kube-apiserver-pause-022944" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:34.696847   51593 pod_ready.go:82] duration metric: took 4.170576ms for pod "kube-apiserver-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.696856   51593 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.700774   51593 pod_ready.go:93] pod "kube-controller-manager-pause-022944" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:34.700805   51593 pod_ready.go:82] duration metric: took 3.942829ms for pod "kube-controller-manager-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.700819   51593 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lr8m7" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.705799   51593 pod_ready.go:93] pod "kube-proxy-lr8m7" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:34.705825   51593 pod_ready.go:82] duration metric: took 4.998729ms for pod "kube-proxy-lr8m7" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.705838   51593 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.711227   51593 pod_ready.go:93] pod "kube-scheduler-pause-022944" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:34.711266   51593 pod_ready.go:82] duration metric: took 5.419704ms for pod "kube-scheduler-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:34.711277   51593 pod_ready.go:39] duration metric: took 14.534820704s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 20:46:34.711297   51593 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 20:46:34.725505   51593 ops.go:34] apiserver oom_adj: -16
	I1216 20:46:34.725543   51593 kubeadm.go:597] duration metric: took 26.57521221s to restartPrimaryControlPlane
	I1216 20:46:34.725556   51593 kubeadm.go:394] duration metric: took 26.776482005s to StartCluster
	I1216 20:46:34.725574   51593 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:46:34.725636   51593 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:46:34.726295   51593 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:46:34.726513   51593 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.189 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 20:46:34.726623   51593 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 20:46:34.726790   51593 config.go:182] Loaded profile config "pause-022944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:46:34.729404   51593 out.go:177] * Verifying Kubernetes components...
	I1216 20:46:34.729401   51593 out.go:177] * Enabled addons: 
	I1216 20:46:33.795742   54373 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1216 20:46:33.795893   54373 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:46:33.795934   54373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:46:33.812578   54373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
	I1216 20:46:33.813128   54373 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:46:33.813738   54373 main.go:141] libmachine: Using API Version  1
	I1216 20:46:33.813772   54373 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:46:33.814201   54373 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:46:33.814420   54373 main.go:141] libmachine: (force-systemd-env-893512) Calling .GetMachineName
	I1216 20:46:33.814635   54373 main.go:141] libmachine: (force-systemd-env-893512) Calling .DriverName
	I1216 20:46:33.814775   54373 start.go:159] libmachine.API.Create for "force-systemd-env-893512" (driver="kvm2")
	I1216 20:46:33.814802   54373 client.go:168] LocalClient.Create starting
	I1216 20:46:33.814838   54373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem
	I1216 20:46:33.814882   54373 main.go:141] libmachine: Decoding PEM data...
	I1216 20:46:33.814905   54373 main.go:141] libmachine: Parsing certificate...
	I1216 20:46:33.814966   54373 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem
	I1216 20:46:33.814998   54373 main.go:141] libmachine: Decoding PEM data...
	I1216 20:46:33.815015   54373 main.go:141] libmachine: Parsing certificate...
	I1216 20:46:33.815051   54373 main.go:141] libmachine: Running pre-create checks...
	I1216 20:46:33.815064   54373 main.go:141] libmachine: (force-systemd-env-893512) Calling .PreCreateCheck
	I1216 20:46:33.815473   54373 main.go:141] libmachine: (force-systemd-env-893512) Calling .GetConfigRaw
	I1216 20:46:33.815880   54373 main.go:141] libmachine: Creating machine...
	I1216 20:46:33.815893   54373 main.go:141] libmachine: (force-systemd-env-893512) Calling .Create
	I1216 20:46:33.816026   54373 main.go:141] libmachine: (force-systemd-env-893512) Creating KVM machine...
	I1216 20:46:33.817249   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | found existing default KVM network
	I1216 20:46:33.818760   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | I1216 20:46:33.818575   54425 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f5f0}
	I1216 20:46:33.818787   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | created network xml: 
	I1216 20:46:33.818798   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | <network>
	I1216 20:46:33.818807   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |   <name>mk-force-systemd-env-893512</name>
	I1216 20:46:33.818826   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |   <dns enable='no'/>
	I1216 20:46:33.818837   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |   
	I1216 20:46:33.818851   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1216 20:46:33.818861   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |     <dhcp>
	I1216 20:46:33.818875   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1216 20:46:33.818883   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |     </dhcp>
	I1216 20:46:33.818918   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |   </ip>
	I1216 20:46:33.818932   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG |   
	I1216 20:46:33.818944   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | </network>
	I1216 20:46:33.818954   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | 
	I1216 20:46:33.824388   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | trying to create private KVM network mk-force-systemd-env-893512 192.168.39.0/24...
	I1216 20:46:33.898632   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | private KVM network mk-force-systemd-env-893512 192.168.39.0/24 created
	I1216 20:46:33.898669   54373 main.go:141] libmachine: (force-systemd-env-893512) Setting up store path in /home/jenkins/minikube-integration/20091-7083/.minikube/machines/force-systemd-env-893512 ...
	I1216 20:46:33.898683   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | I1216 20:46:33.898612   54425 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:46:33.898697   54373 main.go:141] libmachine: (force-systemd-env-893512) Building disk image from file:///home/jenkins/minikube-integration/20091-7083/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso
	I1216 20:46:33.898847   54373 main.go:141] libmachine: (force-systemd-env-893512) Downloading /home/jenkins/minikube-integration/20091-7083/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20091-7083/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1216 20:46:34.144443   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | I1216 20:46:34.144273   54425 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/force-systemd-env-893512/id_rsa...
	I1216 20:46:34.238180   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | I1216 20:46:34.238038   54425 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/force-systemd-env-893512/force-systemd-env-893512.rawdisk...
	I1216 20:46:34.238214   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Writing magic tar header
	I1216 20:46:34.238232   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Writing SSH key tar header
	I1216 20:46:34.238245   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | I1216 20:46:34.238164   54425 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/20091-7083/.minikube/machines/force-systemd-env-893512 ...
	I1216 20:46:34.238260   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/force-systemd-env-893512
	I1216 20:46:34.238327   54373 main.go:141] libmachine: (force-systemd-env-893512) Setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube/machines/force-systemd-env-893512 (perms=drwx------)
	I1216 20:46:34.238356   54373 main.go:141] libmachine: (force-systemd-env-893512) Setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube/machines (perms=drwxr-xr-x)
	I1216 20:46:34.238372   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube/machines
	I1216 20:46:34.238388   54373 main.go:141] libmachine: (force-systemd-env-893512) Setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube (perms=drwxr-xr-x)
	I1216 20:46:34.238406   54373 main.go:141] libmachine: (force-systemd-env-893512) Setting executable bit set on /home/jenkins/minikube-integration/20091-7083 (perms=drwxrwxr-x)
	I1216 20:46:34.238421   54373 main.go:141] libmachine: (force-systemd-env-893512) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1216 20:46:34.238434   54373 main.go:141] libmachine: (force-systemd-env-893512) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1216 20:46:34.238450   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:46:34.238477   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20091-7083
	I1216 20:46:34.238489   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1216 20:46:34.238503   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Checking permissions on dir: /home/jenkins
	I1216 20:46:34.238515   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Checking permissions on dir: /home
	I1216 20:46:34.238528   54373 main.go:141] libmachine: (force-systemd-env-893512) Creating domain...
	I1216 20:46:34.238541   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | Skipping /home - not owner
	I1216 20:46:34.239667   54373 main.go:141] libmachine: (force-systemd-env-893512) define libvirt domain using xml: 
	I1216 20:46:34.239693   54373 main.go:141] libmachine: (force-systemd-env-893512) <domain type='kvm'>
	I1216 20:46:34.239704   54373 main.go:141] libmachine: (force-systemd-env-893512)   <name>force-systemd-env-893512</name>
	I1216 20:46:34.239721   54373 main.go:141] libmachine: (force-systemd-env-893512)   <memory unit='MiB'>2048</memory>
	I1216 20:46:34.239734   54373 main.go:141] libmachine: (force-systemd-env-893512)   <vcpu>2</vcpu>
	I1216 20:46:34.239745   54373 main.go:141] libmachine: (force-systemd-env-893512)   <features>
	I1216 20:46:34.239756   54373 main.go:141] libmachine: (force-systemd-env-893512)     <acpi/>
	I1216 20:46:34.239765   54373 main.go:141] libmachine: (force-systemd-env-893512)     <apic/>
	I1216 20:46:34.239773   54373 main.go:141] libmachine: (force-systemd-env-893512)     <pae/>
	I1216 20:46:34.239779   54373 main.go:141] libmachine: (force-systemd-env-893512)     
	I1216 20:46:34.239789   54373 main.go:141] libmachine: (force-systemd-env-893512)   </features>
	I1216 20:46:34.239803   54373 main.go:141] libmachine: (force-systemd-env-893512)   <cpu mode='host-passthrough'>
	I1216 20:46:34.239814   54373 main.go:141] libmachine: (force-systemd-env-893512)   
	I1216 20:46:34.239825   54373 main.go:141] libmachine: (force-systemd-env-893512)   </cpu>
	I1216 20:46:34.239832   54373 main.go:141] libmachine: (force-systemd-env-893512)   <os>
	I1216 20:46:34.239843   54373 main.go:141] libmachine: (force-systemd-env-893512)     <type>hvm</type>
	I1216 20:46:34.239858   54373 main.go:141] libmachine: (force-systemd-env-893512)     <boot dev='cdrom'/>
	I1216 20:46:34.239867   54373 main.go:141] libmachine: (force-systemd-env-893512)     <boot dev='hd'/>
	I1216 20:46:34.239876   54373 main.go:141] libmachine: (force-systemd-env-893512)     <bootmenu enable='no'/>
	I1216 20:46:34.239888   54373 main.go:141] libmachine: (force-systemd-env-893512)   </os>
	I1216 20:46:34.239899   54373 main.go:141] libmachine: (force-systemd-env-893512)   <devices>
	I1216 20:46:34.239912   54373 main.go:141] libmachine: (force-systemd-env-893512)     <disk type='file' device='cdrom'>
	I1216 20:46:34.239927   54373 main.go:141] libmachine: (force-systemd-env-893512)       <source file='/home/jenkins/minikube-integration/20091-7083/.minikube/machines/force-systemd-env-893512/boot2docker.iso'/>
	I1216 20:46:34.239938   54373 main.go:141] libmachine: (force-systemd-env-893512)       <target dev='hdc' bus='scsi'/>
	I1216 20:46:34.239966   54373 main.go:141] libmachine: (force-systemd-env-893512)       <readonly/>
	I1216 20:46:34.239985   54373 main.go:141] libmachine: (force-systemd-env-893512)     </disk>
	I1216 20:46:34.239999   54373 main.go:141] libmachine: (force-systemd-env-893512)     <disk type='file' device='disk'>
	I1216 20:46:34.240017   54373 main.go:141] libmachine: (force-systemd-env-893512)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1216 20:46:34.240034   54373 main.go:141] libmachine: (force-systemd-env-893512)       <source file='/home/jenkins/minikube-integration/20091-7083/.minikube/machines/force-systemd-env-893512/force-systemd-env-893512.rawdisk'/>
	I1216 20:46:34.240052   54373 main.go:141] libmachine: (force-systemd-env-893512)       <target dev='hda' bus='virtio'/>
	I1216 20:46:34.240063   54373 main.go:141] libmachine: (force-systemd-env-893512)     </disk>
	I1216 20:46:34.240073   54373 main.go:141] libmachine: (force-systemd-env-893512)     <interface type='network'>
	I1216 20:46:34.240103   54373 main.go:141] libmachine: (force-systemd-env-893512)       <source network='mk-force-systemd-env-893512'/>
	I1216 20:46:34.240140   54373 main.go:141] libmachine: (force-systemd-env-893512)       <model type='virtio'/>
	I1216 20:46:34.240154   54373 main.go:141] libmachine: (force-systemd-env-893512)     </interface>
	I1216 20:46:34.240166   54373 main.go:141] libmachine: (force-systemd-env-893512)     <interface type='network'>
	I1216 20:46:34.240179   54373 main.go:141] libmachine: (force-systemd-env-893512)       <source network='default'/>
	I1216 20:46:34.240189   54373 main.go:141] libmachine: (force-systemd-env-893512)       <model type='virtio'/>
	I1216 20:46:34.240201   54373 main.go:141] libmachine: (force-systemd-env-893512)     </interface>
	I1216 20:46:34.240212   54373 main.go:141] libmachine: (force-systemd-env-893512)     <serial type='pty'>
	I1216 20:46:34.240222   54373 main.go:141] libmachine: (force-systemd-env-893512)       <target port='0'/>
	I1216 20:46:34.240231   54373 main.go:141] libmachine: (force-systemd-env-893512)     </serial>
	I1216 20:46:34.240244   54373 main.go:141] libmachine: (force-systemd-env-893512)     <console type='pty'>
	I1216 20:46:34.240255   54373 main.go:141] libmachine: (force-systemd-env-893512)       <target type='serial' port='0'/>
	I1216 20:46:34.240268   54373 main.go:141] libmachine: (force-systemd-env-893512)     </console>
	I1216 20:46:34.240283   54373 main.go:141] libmachine: (force-systemd-env-893512)     <rng model='virtio'>
	I1216 20:46:34.240295   54373 main.go:141] libmachine: (force-systemd-env-893512)       <backend model='random'>/dev/random</backend>
	I1216 20:46:34.240305   54373 main.go:141] libmachine: (force-systemd-env-893512)     </rng>
	I1216 20:46:34.240316   54373 main.go:141] libmachine: (force-systemd-env-893512)     
	I1216 20:46:34.240324   54373 main.go:141] libmachine: (force-systemd-env-893512)     
	I1216 20:46:34.240329   54373 main.go:141] libmachine: (force-systemd-env-893512)   </devices>
	I1216 20:46:34.240344   54373 main.go:141] libmachine: (force-systemd-env-893512) </domain>
	I1216 20:46:34.240395   54373 main.go:141] libmachine: (force-systemd-env-893512) 
	I1216 20:46:34.244681   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | domain force-systemd-env-893512 has defined MAC address 52:54:00:18:c7:95 in network default
	I1216 20:46:34.245290   54373 main.go:141] libmachine: (force-systemd-env-893512) Ensuring networks are active...
	I1216 20:46:34.245306   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | domain force-systemd-env-893512 has defined MAC address 52:54:00:83:6b:76 in network mk-force-systemd-env-893512
	I1216 20:46:34.246029   54373 main.go:141] libmachine: (force-systemd-env-893512) Ensuring network default is active
	I1216 20:46:34.246385   54373 main.go:141] libmachine: (force-systemd-env-893512) Ensuring network mk-force-systemd-env-893512 is active
	I1216 20:46:34.247072   54373 main.go:141] libmachine: (force-systemd-env-893512) Getting domain xml...
	I1216 20:46:34.247752   54373 main.go:141] libmachine: (force-systemd-env-893512) Creating domain...
	I1216 20:46:35.496722   54373 main.go:141] libmachine: (force-systemd-env-893512) Waiting to get IP...
	I1216 20:46:35.497651   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | domain force-systemd-env-893512 has defined MAC address 52:54:00:83:6b:76 in network mk-force-systemd-env-893512
	I1216 20:46:35.498135   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | unable to find current IP address of domain force-systemd-env-893512 in network mk-force-systemd-env-893512
	I1216 20:46:35.498170   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | I1216 20:46:35.498084   54425 retry.go:31] will retry after 292.329334ms: waiting for machine to come up
	I1216 20:46:35.791776   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | domain force-systemd-env-893512 has defined MAC address 52:54:00:83:6b:76 in network mk-force-systemd-env-893512
	I1216 20:46:35.792490   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | unable to find current IP address of domain force-systemd-env-893512 in network mk-force-systemd-env-893512
	I1216 20:46:35.792525   54373 main.go:141] libmachine: (force-systemd-env-893512) DBG | I1216 20:46:35.792418   54425 retry.go:31] will retry after 292.710823ms: waiting for machine to come up
	I1216 20:46:34.730776   51593 addons.go:510] duration metric: took 4.157918ms for enable addons: enabled=[]
	I1216 20:46:34.730875   51593 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:46:34.907809   51593 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:46:34.926314   51593 node_ready.go:35] waiting up to 6m0s for node "pause-022944" to be "Ready" ...
	I1216 20:46:34.929724   51593 node_ready.go:49] node "pause-022944" has status "Ready":"True"
	I1216 20:46:34.929743   51593 node_ready.go:38] duration metric: took 3.391669ms for node "pause-022944" to be "Ready" ...
	I1216 20:46:34.929752   51593 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 20:46:35.093912   51593 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-b94xm" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:35.490867   51593 pod_ready.go:93] pod "coredns-668d6bf9bc-b94xm" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:35.490889   51593 pod_ready.go:82] duration metric: took 396.948764ms for pod "coredns-668d6bf9bc-b94xm" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:35.490899   51593 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:35.891468   51593 pod_ready.go:93] pod "etcd-pause-022944" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:35.891495   51593 pod_ready.go:82] duration metric: took 400.589423ms for pod "etcd-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:35.891505   51593 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:36.291023   51593 pod_ready.go:93] pod "kube-apiserver-pause-022944" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:36.291055   51593 pod_ready.go:82] duration metric: took 399.542627ms for pod "kube-apiserver-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:36.291070   51593 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:36.691971   51593 pod_ready.go:93] pod "kube-controller-manager-pause-022944" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:36.692004   51593 pod_ready.go:82] duration metric: took 400.924036ms for pod "kube-controller-manager-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:36.692019   51593 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lr8m7" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:37.091274   51593 pod_ready.go:93] pod "kube-proxy-lr8m7" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:37.091309   51593 pod_ready.go:82] duration metric: took 399.280655ms for pod "kube-proxy-lr8m7" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:37.091326   51593 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:33.810953   54398 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:46:33.811007   54398 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:46:33.811018   54398 cache.go:56] Caching tarball of preloaded images
	I1216 20:46:33.811093   54398 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:46:33.811106   54398 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1216 20:46:33.811182   54398 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/force-systemd-flag-406516/config.json ...
	I1216 20:46:33.811204   54398 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/force-systemd-flag-406516/config.json: {Name:mkeae737e8bdb60344cf74453abb58414d34d1af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:46:33.811398   54398 start.go:360] acquireMachinesLock for force-systemd-flag-406516: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:46:37.491652   51593 pod_ready.go:93] pod "kube-scheduler-pause-022944" in "kube-system" namespace has status "Ready":"True"
	I1216 20:46:37.491676   51593 pod_ready.go:82] duration metric: took 400.341579ms for pod "kube-scheduler-pause-022944" in "kube-system" namespace to be "Ready" ...
	I1216 20:46:37.491684   51593 pod_ready.go:39] duration metric: took 2.561924189s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 20:46:37.491702   51593 api_server.go:52] waiting for apiserver process to appear ...
	I1216 20:46:37.491769   51593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:46:37.508224   51593 api_server.go:72] duration metric: took 2.781679737s to wait for apiserver process to appear ...
	I1216 20:46:37.508272   51593 api_server.go:88] waiting for apiserver healthz status ...
	I1216 20:46:37.508298   51593 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8443/healthz ...
	I1216 20:46:37.513421   51593 api_server.go:279] https://192.168.72.189:8443/healthz returned 200:
	ok
	I1216 20:46:37.514668   51593 api_server.go:141] control plane version: v1.32.0
	I1216 20:46:37.514696   51593 api_server.go:131] duration metric: took 6.416459ms to wait for apiserver health ...
	I1216 20:46:37.514707   51593 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 20:46:37.694204   51593 system_pods.go:59] 6 kube-system pods found
	I1216 20:46:37.694243   51593 system_pods.go:61] "coredns-668d6bf9bc-b94xm" [a8987996-bf4a-40e2-8d88-903aa9218b2e] Running
	I1216 20:46:37.694251   51593 system_pods.go:61] "etcd-pause-022944" [dc603cea-4e84-4391-b1f1-4517943407db] Running
	I1216 20:46:37.694256   51593 system_pods.go:61] "kube-apiserver-pause-022944" [be01bace-6a51-448e-9cec-c0a4ecfb62ff] Running
	I1216 20:46:37.694267   51593 system_pods.go:61] "kube-controller-manager-pause-022944" [dd2ed828-82fb-48f9-827a-3447b71f8182] Running
	I1216 20:46:37.694279   51593 system_pods.go:61] "kube-proxy-lr8m7" [669f5a14-2fec-4984-87d0-49e760d25372] Running
	I1216 20:46:37.694285   51593 system_pods.go:61] "kube-scheduler-pause-022944" [5958c4b7-5fb4-4afa-b1f3-a2ed1ab5ed0b] Running
	I1216 20:46:37.694293   51593 system_pods.go:74] duration metric: took 179.5778ms to wait for pod list to return data ...
	I1216 20:46:37.694304   51593 default_sa.go:34] waiting for default service account to be created ...
	I1216 20:46:37.890964   51593 default_sa.go:45] found service account: "default"
	I1216 20:46:37.890993   51593 default_sa.go:55] duration metric: took 196.684137ms for default service account to be created ...
	I1216 20:46:37.891005   51593 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 20:46:38.093900   51593 system_pods.go:86] 6 kube-system pods found
	I1216 20:46:38.093937   51593 system_pods.go:89] "coredns-668d6bf9bc-b94xm" [a8987996-bf4a-40e2-8d88-903aa9218b2e] Running
	I1216 20:46:38.093946   51593 system_pods.go:89] "etcd-pause-022944" [dc603cea-4e84-4391-b1f1-4517943407db] Running
	I1216 20:46:38.093953   51593 system_pods.go:89] "kube-apiserver-pause-022944" [be01bace-6a51-448e-9cec-c0a4ecfb62ff] Running
	I1216 20:46:38.093959   51593 system_pods.go:89] "kube-controller-manager-pause-022944" [dd2ed828-82fb-48f9-827a-3447b71f8182] Running
	I1216 20:46:38.093965   51593 system_pods.go:89] "kube-proxy-lr8m7" [669f5a14-2fec-4984-87d0-49e760d25372] Running
	I1216 20:46:38.093978   51593 system_pods.go:89] "kube-scheduler-pause-022944" [5958c4b7-5fb4-4afa-b1f3-a2ed1ab5ed0b] Running
	I1216 20:46:38.093989   51593 system_pods.go:126] duration metric: took 202.97716ms to wait for k8s-apps to be running ...
	I1216 20:46:38.093999   51593 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 20:46:38.094063   51593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 20:46:38.110340   51593 system_svc.go:56] duration metric: took 16.33053ms WaitForService to wait for kubelet
	I1216 20:46:38.110374   51593 kubeadm.go:582] duration metric: took 3.383837806s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 20:46:38.110392   51593 node_conditions.go:102] verifying NodePressure condition ...
	I1216 20:46:38.292883   51593 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 20:46:38.292916   51593 node_conditions.go:123] node cpu capacity is 2
	I1216 20:46:38.292929   51593 node_conditions.go:105] duration metric: took 182.531603ms to run NodePressure ...
	I1216 20:46:38.292946   51593 start.go:241] waiting for startup goroutines ...
	I1216 20:46:38.292956   51593 start.go:246] waiting for cluster config update ...
	I1216 20:46:38.292968   51593 start.go:255] writing updated cluster config ...
	I1216 20:46:38.293692   51593 ssh_runner.go:195] Run: rm -f paused
	I1216 20:46:38.357004   51593 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 20:46:38.359157   51593 out.go:177] * Done! kubectl is now configured to use "pause-022944" cluster and "default" namespace by default
	I1216 20:46:35.202269   49163 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:46:35.202487   49163 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
	
	==> CRI-O <==
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.102965653Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734382001102940210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125693,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4144bddd-d16a-462f-bf75-a1775a65cb96 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.103625299Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc2049af-65de-4f2f-b4e3-d8d0661f740e name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.103698007Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc2049af-65de-4f2f-b4e3-d8d0661f740e name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.103940762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6094fa8f92769ca5adcf52a34975a0d9d8ea80958b4e324892b3d9bcd855da1a,PodSandboxId:5cc8aa32af5701f7149e556f970de9d66e7c1095594038936146e672bd6d952a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734381975005138689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a195272ef4d8827f64b18c63d184b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552d84b32e77a1b0b6d23519d99b5de113ece3472570ed304f8f9bb47000fef1,PodSandboxId:b4126a0fd4256b8cb2ac481830ba81ac2c9db122fe0cd5cee5b7307051297b7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734381974927294888,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda30d79cf96fd10e8a93666877b7f6f,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5086843e560485814f16eebdcc52c2b92b9d690f0f13a6729b91b43b1b54f608,PodSandboxId:e8d8b9f64f17b17cb413dab1b4cce7abc48906dc27fb52eeb6095ad197c13e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734381974950166064,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e7529e5f8260add0e74e083668b37f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d116e41e987f4b9c01c03319a8c9affb25d2191ee997274388351c27eaf6c8e9,PodSandboxId:59b7e7c1d457800efa0b6f4c37e6de20558f0e36fb3364268f94d7e9c5b1edf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734381974898020296,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751137e41a25750ea13a412704d360ef,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f07a18b183616dcc709257e2aba5cd6c282be5dd7813297819de30d3a1f82c1,PodSandboxId:6fb2babf6d89280b705753df1c09fb797d6ecc08c23bca347aadf3ab36b2975d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1734381967028596182,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lr8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669f5a14-2fec-4984-87d0-49e760d25372,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c9dd0f1fe4bdecb66009d1a58456510526837d73be0121e6ef269692ba6951b,PodSandboxId:fc6add71e5bcb034ec65c1fb72fdb66bc6375bf947fc692aebaa3a868102d543,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734381968121480304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b94xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8987996-bf4a-40e2-8d88-903aa9218b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f92e6114c6560a93d90ee5508a764de4ba7821079c1f9681d049a5138837ea7,PodSandboxId:b4126a0fd4256b8cb2ac481830ba81ac2c9db122fe0cd5cee5b7307051297b7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734381967113369768,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda30d79cf96fd10e8a93666877b7f6f,},Annotations:map[string]string{io.kubernetes.contain
er.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9ac0f1ddcb695e316df090a4f64ea07294b5a9c21536e57422030b7ae06a6c,PodSandboxId:e8d8b9f64f17b17cb413dab1b4cce7abc48906dc27fb52eeb6095ad197c13e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_EXITED,CreatedAt:1734381967067891596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e7529e5f8260add0e74e083668b37f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c482cafbd8f9c2cfef74a63eb751850fb18dd35c0440d01c46655444ca1521cb,PodSandboxId:59b7e7c1d457800efa0b6f4c37e6de20558f0e36fb3364268f94d7e9c5b1edf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1734381967039484686,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751137e41a25750ea13a412704d360ef,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee546aaecd2dd7603a85b67d467058f021900a40a5fa269124e39a6391103c98,PodSandboxId:5cc8aa32af5701f7149e556f970de9d66e7c1095594038936146e672bd6d952a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_EXITED,CreatedAt:1734381966949782400,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a195272ef4d8827f64b18c63d184b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30617acf2d2801c7042c99e0465810bb69557a04c41a3c32ea43501e24cb939c,PodSandboxId:bde86b5873f4afbd6c182d4321830ba2f2498ef349dd6299fb7e3c9f96bc4016,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1734381911663434646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b94xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8987996-bf4a-40e2-8d88-903aa9218b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbb5fcf4e0f39c2e089d4ea0db7705b4166dd0ea1dd790a9a8ffefd961b6d07,PodSandboxId:7f04e2a758fac3d9f6bb076f860b47025127cc601c2311c41576cde5c2ba24ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_EXITED,CreatedAt:1734381910491431494,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lr8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 669f5a14-2fec-4984-87d0-49e760d25372,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc2049af-65de-4f2f-b4e3-d8d0661f740e name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.149425773Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=669fa28e-2929-422e-ab82-0954a5f11b92 name=/runtime.v1.RuntimeService/Version
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.149550622Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=669fa28e-2929-422e-ab82-0954a5f11b92 name=/runtime.v1.RuntimeService/Version
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.150759132Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b4fa57a-ca88-4fc3-9f60-1deb2c8d82e4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.151253249Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734382001151229076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125693,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b4fa57a-ca88-4fc3-9f60-1deb2c8d82e4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.151900741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a1491d2-5025-4f91-b78e-67f3edaff9e1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.151975253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a1491d2-5025-4f91-b78e-67f3edaff9e1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.152294783Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6094fa8f92769ca5adcf52a34975a0d9d8ea80958b4e324892b3d9bcd855da1a,PodSandboxId:5cc8aa32af5701f7149e556f970de9d66e7c1095594038936146e672bd6d952a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734381975005138689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a195272ef4d8827f64b18c63d184b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552d84b32e77a1b0b6d23519d99b5de113ece3472570ed304f8f9bb47000fef1,PodSandboxId:b4126a0fd4256b8cb2ac481830ba81ac2c9db122fe0cd5cee5b7307051297b7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734381974927294888,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda30d79cf96fd10e8a93666877b7f6f,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5086843e560485814f16eebdcc52c2b92b9d690f0f13a6729b91b43b1b54f608,PodSandboxId:e8d8b9f64f17b17cb413dab1b4cce7abc48906dc27fb52eeb6095ad197c13e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734381974950166064,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e7529e5f8260add0e74e083668b37f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d116e41e987f4b9c01c03319a8c9affb25d2191ee997274388351c27eaf6c8e9,PodSandboxId:59b7e7c1d457800efa0b6f4c37e6de20558f0e36fb3364268f94d7e9c5b1edf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734381974898020296,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751137e41a25750ea13a412704d360ef,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f07a18b183616dcc709257e2aba5cd6c282be5dd7813297819de30d3a1f82c1,PodSandboxId:6fb2babf6d89280b705753df1c09fb797d6ecc08c23bca347aadf3ab36b2975d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1734381967028596182,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lr8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669f5a14-2fec-4984-87d0-49e760d25372,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c9dd0f1fe4bdecb66009d1a58456510526837d73be0121e6ef269692ba6951b,PodSandboxId:fc6add71e5bcb034ec65c1fb72fdb66bc6375bf947fc692aebaa3a868102d543,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734381968121480304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b94xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8987996-bf4a-40e2-8d88-903aa9218b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f92e6114c6560a93d90ee5508a764de4ba7821079c1f9681d049a5138837ea7,PodSandboxId:b4126a0fd4256b8cb2ac481830ba81ac2c9db122fe0cd5cee5b7307051297b7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734381967113369768,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda30d79cf96fd10e8a93666877b7f6f,},Annotations:map[string]string{io.kubernetes.contain
er.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9ac0f1ddcb695e316df090a4f64ea07294b5a9c21536e57422030b7ae06a6c,PodSandboxId:e8d8b9f64f17b17cb413dab1b4cce7abc48906dc27fb52eeb6095ad197c13e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_EXITED,CreatedAt:1734381967067891596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e7529e5f8260add0e74e083668b37f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c482cafbd8f9c2cfef74a63eb751850fb18dd35c0440d01c46655444ca1521cb,PodSandboxId:59b7e7c1d457800efa0b6f4c37e6de20558f0e36fb3364268f94d7e9c5b1edf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1734381967039484686,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751137e41a25750ea13a412704d360ef,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee546aaecd2dd7603a85b67d467058f021900a40a5fa269124e39a6391103c98,PodSandboxId:5cc8aa32af5701f7149e556f970de9d66e7c1095594038936146e672bd6d952a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_EXITED,CreatedAt:1734381966949782400,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a195272ef4d8827f64b18c63d184b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30617acf2d2801c7042c99e0465810bb69557a04c41a3c32ea43501e24cb939c,PodSandboxId:bde86b5873f4afbd6c182d4321830ba2f2498ef349dd6299fb7e3c9f96bc4016,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1734381911663434646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b94xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8987996-bf4a-40e2-8d88-903aa9218b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbb5fcf4e0f39c2e089d4ea0db7705b4166dd0ea1dd790a9a8ffefd961b6d07,PodSandboxId:7f04e2a758fac3d9f6bb076f860b47025127cc601c2311c41576cde5c2ba24ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_EXITED,CreatedAt:1734381910491431494,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lr8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 669f5a14-2fec-4984-87d0-49e760d25372,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a1491d2-5025-4f91-b78e-67f3edaff9e1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.204148647Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a48bca18-10c0-4b1e-8335-23e4be17e1d9 name=/runtime.v1.RuntimeService/Version
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.204290370Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a48bca18-10c0-4b1e-8335-23e4be17e1d9 name=/runtime.v1.RuntimeService/Version
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.208853363Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67ca8eb5-8eb1-49e7-9f91-af0f84802812 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.209587129Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734382001209545631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125693,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67ca8eb5-8eb1-49e7-9f91-af0f84802812 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.210453861Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5687df0b-4d35-4b01-bac0-03ce2d9d9135 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.210555727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5687df0b-4d35-4b01-bac0-03ce2d9d9135 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.211117400Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6094fa8f92769ca5adcf52a34975a0d9d8ea80958b4e324892b3d9bcd855da1a,PodSandboxId:5cc8aa32af5701f7149e556f970de9d66e7c1095594038936146e672bd6d952a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734381975005138689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a195272ef4d8827f64b18c63d184b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552d84b32e77a1b0b6d23519d99b5de113ece3472570ed304f8f9bb47000fef1,PodSandboxId:b4126a0fd4256b8cb2ac481830ba81ac2c9db122fe0cd5cee5b7307051297b7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734381974927294888,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda30d79cf96fd10e8a93666877b7f6f,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5086843e560485814f16eebdcc52c2b92b9d690f0f13a6729b91b43b1b54f608,PodSandboxId:e8d8b9f64f17b17cb413dab1b4cce7abc48906dc27fb52eeb6095ad197c13e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734381974950166064,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e7529e5f8260add0e74e083668b37f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d116e41e987f4b9c01c03319a8c9affb25d2191ee997274388351c27eaf6c8e9,PodSandboxId:59b7e7c1d457800efa0b6f4c37e6de20558f0e36fb3364268f94d7e9c5b1edf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734381974898020296,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751137e41a25750ea13a412704d360ef,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f07a18b183616dcc709257e2aba5cd6c282be5dd7813297819de30d3a1f82c1,PodSandboxId:6fb2babf6d89280b705753df1c09fb797d6ecc08c23bca347aadf3ab36b2975d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1734381967028596182,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lr8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669f5a14-2fec-4984-87d0-49e760d25372,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c9dd0f1fe4bdecb66009d1a58456510526837d73be0121e6ef269692ba6951b,PodSandboxId:fc6add71e5bcb034ec65c1fb72fdb66bc6375bf947fc692aebaa3a868102d543,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734381968121480304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b94xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8987996-bf4a-40e2-8d88-903aa9218b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f92e6114c6560a93d90ee5508a764de4ba7821079c1f9681d049a5138837ea7,PodSandboxId:b4126a0fd4256b8cb2ac481830ba81ac2c9db122fe0cd5cee5b7307051297b7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734381967113369768,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda30d79cf96fd10e8a93666877b7f6f,},Annotations:map[string]string{io.kubernetes.contain
er.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9ac0f1ddcb695e316df090a4f64ea07294b5a9c21536e57422030b7ae06a6c,PodSandboxId:e8d8b9f64f17b17cb413dab1b4cce7abc48906dc27fb52eeb6095ad197c13e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_EXITED,CreatedAt:1734381967067891596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e7529e5f8260add0e74e083668b37f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c482cafbd8f9c2cfef74a63eb751850fb18dd35c0440d01c46655444ca1521cb,PodSandboxId:59b7e7c1d457800efa0b6f4c37e6de20558f0e36fb3364268f94d7e9c5b1edf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1734381967039484686,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751137e41a25750ea13a412704d360ef,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee546aaecd2dd7603a85b67d467058f021900a40a5fa269124e39a6391103c98,PodSandboxId:5cc8aa32af5701f7149e556f970de9d66e7c1095594038936146e672bd6d952a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_EXITED,CreatedAt:1734381966949782400,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a195272ef4d8827f64b18c63d184b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30617acf2d2801c7042c99e0465810bb69557a04c41a3c32ea43501e24cb939c,PodSandboxId:bde86b5873f4afbd6c182d4321830ba2f2498ef349dd6299fb7e3c9f96bc4016,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1734381911663434646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b94xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8987996-bf4a-40e2-8d88-903aa9218b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbb5fcf4e0f39c2e089d4ea0db7705b4166dd0ea1dd790a9a8ffefd961b6d07,PodSandboxId:7f04e2a758fac3d9f6bb076f860b47025127cc601c2311c41576cde5c2ba24ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_EXITED,CreatedAt:1734381910491431494,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lr8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 669f5a14-2fec-4984-87d0-49e760d25372,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5687df0b-4d35-4b01-bac0-03ce2d9d9135 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.257370050Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c572f47e-099d-4c75-bcfe-521a596a8a72 name=/runtime.v1.RuntimeService/Version
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.257501377Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c572f47e-099d-4c75-bcfe-521a596a8a72 name=/runtime.v1.RuntimeService/Version
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.258499312Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb80e8ab-0475-4e6f-af23-792e7a5587e7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.258888178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734382001258854122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125693,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb80e8ab-0475-4e6f-af23-792e7a5587e7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.259694727Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28bd85e2-1be3-4bc3-8a9e-bf9448f9cbe5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.259773856Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28bd85e2-1be3-4bc3-8a9e-bf9448f9cbe5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 20:46:41 pause-022944 crio[2402]: time="2024-12-16 20:46:41.260028546Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6094fa8f92769ca5adcf52a34975a0d9d8ea80958b4e324892b3d9bcd855da1a,PodSandboxId:5cc8aa32af5701f7149e556f970de9d66e7c1095594038936146e672bd6d952a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734381975005138689,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a195272ef4d8827f64b18c63d184b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552d84b32e77a1b0b6d23519d99b5de113ece3472570ed304f8f9bb47000fef1,PodSandboxId:b4126a0fd4256b8cb2ac481830ba81ac2c9db122fe0cd5cee5b7307051297b7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734381974927294888,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda30d79cf96fd10e8a93666877b7f6f,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5086843e560485814f16eebdcc52c2b92b9d690f0f13a6729b91b43b1b54f608,PodSandboxId:e8d8b9f64f17b17cb413dab1b4cce7abc48906dc27fb52eeb6095ad197c13e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734381974950166064,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e7529e5f8260add0e74e083668b37f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d116e41e987f4b9c01c03319a8c9affb25d2191ee997274388351c27eaf6c8e9,PodSandboxId:59b7e7c1d457800efa0b6f4c37e6de20558f0e36fb3364268f94d7e9c5b1edf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734381974898020296,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751137e41a25750ea13a412704d360ef,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f07a18b183616dcc709257e2aba5cd6c282be5dd7813297819de30d3a1f82c1,PodSandboxId:6fb2babf6d89280b705753df1c09fb797d6ecc08c23bca347aadf3ab36b2975d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1734381967028596182,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lr8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669f5a14-2fec-4984-87d0-49e760d25372,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c9dd0f1fe4bdecb66009d1a58456510526837d73be0121e6ef269692ba6951b,PodSandboxId:fc6add71e5bcb034ec65c1fb72fdb66bc6375bf947fc692aebaa3a868102d543,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734381968121480304,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b94xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8987996-bf4a-40e2-8d88-903aa9218b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f92e6114c6560a93d90ee5508a764de4ba7821079c1f9681d049a5138837ea7,PodSandboxId:b4126a0fd4256b8cb2ac481830ba81ac2c9db122fe0cd5cee5b7307051297b7b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734381967113369768,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bda30d79cf96fd10e8a93666877b7f6f,},Annotations:map[string]string{io.kubernetes.contain
er.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d9ac0f1ddcb695e316df090a4f64ea07294b5a9c21536e57422030b7ae06a6c,PodSandboxId:e8d8b9f64f17b17cb413dab1b4cce7abc48906dc27fb52eeb6095ad197c13e10,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_EXITED,CreatedAt:1734381967067891596,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e7529e5f8260add0e74e083668b37f,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,i
o.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c482cafbd8f9c2cfef74a63eb751850fb18dd35c0440d01c46655444ca1521cb,PodSandboxId:59b7e7c1d457800efa0b6f4c37e6de20558f0e36fb3364268f94d7e9c5b1edf7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_EXITED,CreatedAt:1734381967039484686,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 751137e41a25750ea13a412704d360ef,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee546aaecd2dd7603a85b67d467058f021900a40a5fa269124e39a6391103c98,PodSandboxId:5cc8aa32af5701f7149e556f970de9d66e7c1095594038936146e672bd6d952a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_EXITED,CreatedAt:1734381966949782400,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-022944,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 045a195272ef4d8827f64b18c63d184b,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30617acf2d2801c7042c99e0465810bb69557a04c41a3c32ea43501e24cb939c,PodSandboxId:bde86b5873f4afbd6c182d4321830ba2f2498ef349dd6299fb7e3c9f96bc4016,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1734381911663434646,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-b94xm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8987996-bf4a-40e2-8d88-903aa9218b2e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dbb5fcf4e0f39c2e089d4ea0db7705b4166dd0ea1dd790a9a8ffefd961b6d07,PodSandboxId:7f04e2a758fac3d9f6bb076f860b47025127cc601c2311c41576cde5c2ba24ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_EXITED,CreatedAt:1734381910491431494,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lr8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 669f5a14-2fec-4984-87d0-49e760d25372,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28bd85e2-1be3-4bc3-8a9e-bf9448f9cbe5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6094fa8f92769       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   26 seconds ago       Running             kube-controller-manager   2                   5cc8aa32af570       kube-controller-manager-pause-022944
	5086843e56048       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   26 seconds ago       Running             kube-scheduler            2                   e8d8b9f64f17b       kube-scheduler-pause-022944
	552d84b32e77a       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   26 seconds ago       Running             kube-apiserver            2                   b4126a0fd4256       kube-apiserver-pause-022944
	d116e41e987f4       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   26 seconds ago       Running             etcd                      2                   59b7e7c1d4578       etcd-pause-022944
	8c9dd0f1fe4bd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   33 seconds ago       Running             coredns                   1                   fc6add71e5bcb       coredns-668d6bf9bc-b94xm
	3f92e6114c656       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   34 seconds ago       Exited              kube-apiserver            1                   b4126a0fd4256       kube-apiserver-pause-022944
	4d9ac0f1ddcb6       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   34 seconds ago       Exited              kube-scheduler            1                   e8d8b9f64f17b       kube-scheduler-pause-022944
	c482cafbd8f9c       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   34 seconds ago       Exited              etcd                      1                   59b7e7c1d4578       etcd-pause-022944
	7f07a18b18361       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   34 seconds ago       Running             kube-proxy                1                   6fb2babf6d892       kube-proxy-lr8m7
	ee546aaecd2dd       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   34 seconds ago       Exited              kube-controller-manager   1                   5cc8aa32af570       kube-controller-manager-pause-022944
	30617acf2d280       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   bde86b5873f4a       coredns-668d6bf9bc-b94xm
	5dbb5fcf4e0f3       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   About a minute ago   Exited              kube-proxy                0                   7f04e2a758fac       kube-proxy-lr8m7
	
	
	==> coredns [30617acf2d2801c7042c99e0465810bb69557a04c41a3c32ea43501e24cb939c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/kubernetes: Trace[487060525]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Dec-2024 20:45:12.034) (total time: 29578ms):
	Trace[487060525]: ---"Objects listed" error:<nil> 29578ms (20:45:41.613)
	Trace[487060525]: [29.578832566s] [29.578832566s] END
	[INFO] plugin/kubernetes: Trace[346612538]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Dec-2024 20:45:12.029) (total time: 29584ms):
	Trace[346612538]: ---"Objects listed" error:<nil> 29584ms (20:45:41.613)
	Trace[346612538]: [29.584722103s] [29.584722103s] END
	[INFO] plugin/kubernetes: Trace[31414365]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Dec-2024 20:45:12.034) (total time: 29579ms):
	Trace[31414365]: ---"Objects listed" error:<nil> 29579ms (20:45:41.614)
	Trace[31414365]: [29.579975093s] [29.579975093s] END
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	[INFO] Reloading complete
	[INFO] 127.0.0.1:53038 - 61622 "HINFO IN 8161388205809105035.1151591015590456726. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.046913983s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8c9dd0f1fe4bdecb66009d1a58456510526837d73be0121e6ef269692ba6951b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] 127.0.0.1:56759 - 75 "HINFO IN 7367595749151626410.4636715588362668321. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027954479s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               pause-022944
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-022944
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477
	                    minikube.k8s.io/name=pause-022944
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T20_45_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 20:45:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-022944
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 20:46:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 20:46:18 +0000   Mon, 16 Dec 2024 20:45:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 20:46:18 +0000   Mon, 16 Dec 2024 20:45:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 20:46:18 +0000   Mon, 16 Dec 2024 20:45:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 20:46:18 +0000   Mon, 16 Dec 2024 20:45:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.189
	  Hostname:    pause-022944
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 c03fe8ee6686421d97e89d57e2c72201
	  System UUID:                c03fe8ee-6686-421d-97e8-9d57e2c72201
	  Boot ID:                    0f74cc51-9a62-4fac-879f-d09f143e28d3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-b94xm                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     91s
	  kube-system                 etcd-pause-022944                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         96s
	  kube-system                 kube-apiserver-pause-022944             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-pause-022944    200m (10%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-proxy-lr8m7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-pause-022944             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 90s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeHasSufficientPID     96s                kubelet          Node pause-022944 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  96s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  96s                kubelet          Node pause-022944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                kubelet          Node pause-022944 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 96s                kubelet          Starting kubelet.
	  Normal  NodeReady                95s                kubelet          Node pause-022944 status is now: NodeReady
	  Normal  RegisteredNode           92s                node-controller  Node pause-022944 event: Registered Node pause-022944 in Controller
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-022944 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-022944 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-022944 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20s                node-controller  Node pause-022944 event: Registered Node pause-022944 in Controller
	
	
	==> dmesg <==
	[  +0.058906] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.079445] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.219052] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.139162] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.296069] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +4.401628] systemd-fstab-generator[745]: Ignoring "noauto" option for root device
	[  +0.073434] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.061570] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +1.234950] kauditd_printk_skb: 57 callbacks suppressed
	[Dec16 20:45] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	[  +0.088409] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.973344] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	[  +0.054865] kauditd_printk_skb: 21 callbacks suppressed
	[ +11.817898] kauditd_printk_skb: 88 callbacks suppressed
	[ +37.133621] systemd-fstab-generator[2328]: Ignoring "noauto" option for root device
	[  +0.150066] systemd-fstab-generator[2340]: Ignoring "noauto" option for root device
	[  +0.186534] systemd-fstab-generator[2354]: Ignoring "noauto" option for root device
	[  +0.146567] systemd-fstab-generator[2366]: Ignoring "noauto" option for root device
	[  +0.283910] systemd-fstab-generator[2394]: Ignoring "noauto" option for root device
	[Dec16 20:46] systemd-fstab-generator[2522]: Ignoring "noauto" option for root device
	[  +0.099392] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.507514] kauditd_printk_skb: 85 callbacks suppressed
	[  +2.120046] systemd-fstab-generator[3308]: Ignoring "noauto" option for root device
	[  +5.946334] kauditd_printk_skb: 40 callbacks suppressed
	[ +14.853879] systemd-fstab-generator[3675]: Ignoring "noauto" option for root device
	
	
	==> etcd [c482cafbd8f9c2cfef74a63eb751850fb18dd35c0440d01c46655444ca1521cb] <==
	{"level":"info","ts":"2024-12-16T20:46:09.330474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-16T20:46:09.330545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc received MsgPreVoteResp from 387e2109401c13dc at term 2"}
	{"level":"info","ts":"2024-12-16T20:46:09.330591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc became candidate at term 3"}
	{"level":"info","ts":"2024-12-16T20:46:09.330630Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc received MsgVoteResp from 387e2109401c13dc at term 3"}
	{"level":"info","ts":"2024-12-16T20:46:09.330659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc became leader at term 3"}
	{"level":"info","ts":"2024-12-16T20:46:09.330685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 387e2109401c13dc elected leader 387e2109401c13dc at term 3"}
	{"level":"info","ts":"2024-12-16T20:46:09.335902Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"387e2109401c13dc","local-member-attributes":"{Name:pause-022944 ClientURLs:[https://192.168.72.189:2379]}","request-path":"/0/members/387e2109401c13dc/attributes","cluster-id":"69ad83e0a7175c67","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-16T20:46:09.339203Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T20:46:09.339820Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T20:46:09.346128Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T20:46:09.346718Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-16T20:46:09.340103Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-16T20:46:09.348219Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-16T20:46:09.348721Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T20:46:09.349737Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.189:2379"}
	{"level":"info","ts":"2024-12-16T20:46:12.337260Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-12-16T20:46:12.337331Z","caller":"embed/etcd.go:378","msg":"closing etcd server","name":"pause-022944","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.189:2380"],"advertise-client-urls":["https://192.168.72.189:2379"]}
	{"level":"warn","ts":"2024-12-16T20:46:12.337411Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-16T20:46:12.337457Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-16T20:46:12.337532Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.189:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-16T20:46:12.337541Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.189:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-16T20:46:12.339165Z","caller":"etcdserver/server.go:1543","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"387e2109401c13dc","current-leader-member-id":"387e2109401c13dc"}
	{"level":"info","ts":"2024-12-16T20:46:12.343142Z","caller":"embed/etcd.go:582","msg":"stopping serving peer traffic","address":"192.168.72.189:2380"}
	{"level":"info","ts":"2024-12-16T20:46:12.343311Z","caller":"embed/etcd.go:587","msg":"stopped serving peer traffic","address":"192.168.72.189:2380"}
	{"level":"info","ts":"2024-12-16T20:46:12.343324Z","caller":"embed/etcd.go:380","msg":"closed etcd server","name":"pause-022944","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.189:2380"],"advertise-client-urls":["https://192.168.72.189:2379"]}
	
	
	==> etcd [d116e41e987f4b9c01c03319a8c9affb25d2191ee997274388351c27eaf6c8e9] <==
	{"level":"info","ts":"2024-12-16T20:46:15.321598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc switched to configuration voters=(4070727436803511260)"}
	{"level":"info","ts":"2024-12-16T20:46:15.321648Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"69ad83e0a7175c67","local-member-id":"387e2109401c13dc","added-peer-id":"387e2109401c13dc","added-peer-peer-urls":["https://192.168.72.189:2380"]}
	{"level":"info","ts":"2024-12-16T20:46:15.321728Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"69ad83e0a7175c67","local-member-id":"387e2109401c13dc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T20:46:15.321748Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T20:46:15.325649Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-16T20:46:15.326254Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"387e2109401c13dc","initial-advertise-peer-urls":["https://192.168.72.189:2380"],"listen-peer-urls":["https://192.168.72.189:2380"],"advertise-client-urls":["https://192.168.72.189:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.189:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-16T20:46:15.326545Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-16T20:46:15.325865Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.72.189:2380"}
	{"level":"info","ts":"2024-12-16T20:46:15.332123Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.72.189:2380"}
	{"level":"info","ts":"2024-12-16T20:46:16.482154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc is starting a new election at term 3"}
	{"level":"info","ts":"2024-12-16T20:46:16.482268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-12-16T20:46:16.482313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc received MsgPreVoteResp from 387e2109401c13dc at term 3"}
	{"level":"info","ts":"2024-12-16T20:46:16.482342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc became candidate at term 4"}
	{"level":"info","ts":"2024-12-16T20:46:16.482363Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc received MsgVoteResp from 387e2109401c13dc at term 4"}
	{"level":"info","ts":"2024-12-16T20:46:16.482384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"387e2109401c13dc became leader at term 4"}
	{"level":"info","ts":"2024-12-16T20:46:16.482402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 387e2109401c13dc elected leader 387e2109401c13dc at term 4"}
	{"level":"info","ts":"2024-12-16T20:46:16.488433Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"387e2109401c13dc","local-member-attributes":"{Name:pause-022944 ClientURLs:[https://192.168.72.189:2379]}","request-path":"/0/members/387e2109401c13dc/attributes","cluster-id":"69ad83e0a7175c67","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-16T20:46:16.488583Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T20:46:16.489035Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T20:46:16.489690Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T20:46:16.492833Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-16T20:46:16.493480Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T20:46:16.528479Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.189:2379"}
	{"level":"info","ts":"2024-12-16T20:46:16.495166Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-16T20:46:16.528931Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:46:41 up 2 min,  0 users,  load average: 1.25, 0.54, 0.20
	Linux pause-022944 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3f92e6114c6560a93d90ee5508a764de4ba7821079c1f9681d049a5138837ea7] <==
	I1216 20:46:11.094319       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1216 20:46:11.094352       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for crd-autoregister" logger="UnhandledError"
	I1216 20:46:11.095250       1 crd_finalizer.go:273] Shutting down CRDFinalizer
	I1216 20:46:11.095330       1 apiapproval_controller.go:193] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I1216 20:46:11.095353       1 nonstructuralschema_controller.go:199] Shutting down NonStructuralSchemaConditionController
	I1216 20:46:11.095374       1 establishing_controller.go:85] Shutting down EstablishingController
	I1216 20:46:11.095396       1 crdregistration_controller.go:119] Shutting down crd-autoregister controller
	I1216 20:46:11.095420       1 naming_controller.go:298] Shutting down NamingConditionController
	E1216 20:46:11.095450       1 controller.go:95] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E1216 20:46:11.095477       1 controller.go:148] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	E1216 20:46:11.096170       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for APIServiceRegistrationController controller" logger="UnhandledError"
	I1216 20:46:11.096256       1 controller.go:84] Shutting down OpenAPI AggregationController
	E1216 20:46:11.096288       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for LocalAvailability controller" logger="UnhandledError"
	F1216 20:46:11.096310       1 hooks.go:204] PostStartHook "crd-informer-synced" failed: timed out waiting for the condition
	E1216 20:46:11.166314       1 gc_controller.go:84] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	I1216 20:46:11.169227       1 gc_controller.go:85] Shutting down apiserver lease garbage collector
	I1216 20:46:11.169316       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	E1216 20:46:11.169379       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for cluster_authentication_trust_controller" logger="UnhandledError"
	I1216 20:46:11.169422       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E1216 20:46:11.169492       1 cache.go:35] "Unhandled Error" err="Unable to sync caches for RemoteAvailability controller" logger="UnhandledError"
	I1216 20:46:11.169541       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	E1216 20:46:11.169588       1 customresource_discovery_controller.go:295] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	I1216 20:46:11.169620       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	E1216 20:46:11.169652       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for configmaps" logger="UnhandledError"
	E1216 20:46:11.169683       1 controller.go:89] "Unhandled Error" err="timed out waiting for caches to sync" logger="UnhandledError"
	
	
	==> kube-apiserver [552d84b32e77a1b0b6d23519d99b5de113ece3472570ed304f8f9bb47000fef1] <==
	I1216 20:46:18.208565       1 shared_informer.go:320] Caches are synced for configmaps
	I1216 20:46:18.208642       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1216 20:46:18.211896       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1216 20:46:18.212609       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 20:46:18.219956       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1216 20:46:18.220374       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1216 20:46:18.246332       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1216 20:46:18.223465       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1216 20:46:18.240423       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1216 20:46:18.253969       1 policy_source.go:240] refreshing policies
	I1216 20:46:18.254118       1 aggregator.go:171] initial CRD sync complete...
	I1216 20:46:18.254167       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 20:46:18.254195       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 20:46:18.254216       1 cache.go:39] Caches are synced for autoregister controller
	I1216 20:46:18.246276       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 20:46:18.270097       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 20:46:18.415932       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1216 20:46:19.108233       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 20:46:19.991572       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1216 20:46:20.036958       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1216 20:46:20.070996       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 20:46:20.080402       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 20:46:21.619947       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 20:46:21.668429       1 controller.go:615] quota admission added evaluator for: endpoints
	I1216 20:46:22.086698       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6094fa8f92769ca5adcf52a34975a0d9d8ea80958b4e324892b3d9bcd855da1a] <==
	I1216 20:46:21.418169       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1216 20:46:21.418285       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1216 20:46:21.418338       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1216 20:46:21.421341       1 shared_informer.go:320] Caches are synced for node
	I1216 20:46:21.421394       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 20:46:21.421416       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 20:46:21.421420       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1216 20:46:21.421424       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1216 20:46:21.421491       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-022944"
	I1216 20:46:21.427401       1 shared_informer.go:320] Caches are synced for resource quota
	I1216 20:46:21.429926       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1216 20:46:21.441258       1 shared_informer.go:320] Caches are synced for resource quota
	I1216 20:46:21.452708       1 shared_informer.go:320] Caches are synced for garbage collector
	I1216 20:46:21.452791       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 20:46:21.452804       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 20:46:21.458595       1 shared_informer.go:320] Caches are synced for stateful set
	I1216 20:46:21.459913       1 shared_informer.go:320] Caches are synced for expand
	I1216 20:46:21.461962       1 shared_informer.go:320] Caches are synced for endpoint
	I1216 20:46:21.464020       1 shared_informer.go:320] Caches are synced for deployment
	I1216 20:46:21.464940       1 shared_informer.go:320] Caches are synced for persistent volume
	I1216 20:46:21.490943       1 shared_informer.go:320] Caches are synced for garbage collector
	I1216 20:46:22.094301       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="72.320591ms"
	I1216 20:46:22.094976       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="73.904µs"
	I1216 20:46:22.118145       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="22.602068ms"
	I1216 20:46:22.118636       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="65.152µs"
	
	
	==> kube-controller-manager [ee546aaecd2dd7603a85b67d467058f021900a40a5fa269124e39a6391103c98] <==
	I1216 20:46:09.093772       1 serving.go:386] Generated self-signed cert in-memory
	I1216 20:46:09.342531       1 controllermanager.go:185] "Starting" version="v1.32.0"
	I1216 20:46:09.342635       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 20:46:09.345043       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1216 20:46:09.347461       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1216 20:46:09.347540       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 20:46:09.347602       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-proxy [5dbb5fcf4e0f39c2e089d4ea0db7705b4166dd0ea1dd790a9a8ffefd961b6d07] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1216 20:45:11.186917       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1216 20:45:11.292844       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.189"]
	E1216 20:45:11.292941       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 20:45:11.446358       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1216 20:45:11.446458       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 20:45:11.446523       1 server_linux.go:170] "Using iptables Proxier"
	I1216 20:45:11.466193       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 20:45:11.466825       1 server.go:497] "Version info" version="v1.32.0"
	I1216 20:45:11.466838       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 20:45:11.471481       1 config.go:199] "Starting service config controller"
	I1216 20:45:11.471527       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 20:45:11.471554       1 config.go:105] "Starting endpoint slice config controller"
	I1216 20:45:11.471558       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 20:45:11.489708       1 config.go:329] "Starting node config controller"
	I1216 20:45:11.489740       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 20:45:11.571696       1 shared_informer.go:320] Caches are synced for service config
	I1216 20:45:11.571772       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 20:45:11.591843       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [7f07a18b183616dcc709257e2aba5cd6c282be5dd7813297819de30d3a1f82c1] <==
	E1216 20:46:09.477869       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1216 20:46:12.184684       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-022944\": dial tcp 192.168.72.189:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.72.189:59804->192.168.72.189:8443: read: connection reset by peer"
	E1216 20:46:13.370704       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-022944\": dial tcp 192.168.72.189:8443: connect: connection refused"
	E1216 20:46:15.480575       1 server.go:687] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-022944\": dial tcp 192.168.72.189:8443: connect: connection refused"
	I1216 20:46:19.657722       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.72.189"]
	E1216 20:46:19.657965       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 20:46:19.698847       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1216 20:46:19.698959       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 20:46:19.699008       1 server_linux.go:170] "Using iptables Proxier"
	I1216 20:46:19.702023       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 20:46:19.702472       1 server.go:497] "Version info" version="v1.32.0"
	I1216 20:46:19.702520       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 20:46:19.703884       1 config.go:199] "Starting service config controller"
	I1216 20:46:19.703943       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 20:46:19.703978       1 config.go:105] "Starting endpoint slice config controller"
	I1216 20:46:19.703994       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 20:46:19.704591       1 config.go:329] "Starting node config controller"
	I1216 20:46:19.704638       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 20:46:19.804267       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 20:46:19.804296       1 shared_informer.go:320] Caches are synced for service config
	I1216 20:46:19.804870       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4d9ac0f1ddcb695e316df090a4f64ea07294b5a9c21536e57422030b7ae06a6c] <==
	I1216 20:46:09.019782       1 serving.go:386] Generated self-signed cert in-memory
	W1216 20:46:12.186293       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.72.189:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.72.189:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.72.189:59820->192.168.72.189:8443: read: connection reset by peer
	W1216 20:46:12.186364       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 20:46:12.186376       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 20:46:12.202410       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1216 20:46:12.202433       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1216 20:46:12.202457       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1216 20:46:12.207406       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E1216 20:46:12.207538       1 server.go:266] "waiting for handlers to sync" err="context canceled"
	E1216 20:46:12.207601       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [5086843e560485814f16eebdcc52c2b92b9d690f0f13a6729b91b43b1b54f608] <==
	I1216 20:46:16.051686       1 serving.go:386] Generated self-signed cert in-memory
	I1216 20:46:18.313249       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1216 20:46:18.313308       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 20:46:18.337577       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1216 20:46:18.337633       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1216 20:46:18.337711       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 20:46:18.337753       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1216 20:46:18.337774       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 20:46:18.337808       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1216 20:46:18.338746       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1216 20:46:18.338829       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 20:46:18.438565       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I1216 20:46:18.439042       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1216 20:46:18.439846       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Dec 16 20:46:17 pause-022944 kubelet[3315]: E1216 20:46:17.382220    3315 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-022944\" not found" node="pause-022944"
	Dec 16 20:46:17 pause-022944 kubelet[3315]: E1216 20:46:17.383158    3315 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-022944\" not found" node="pause-022944"
	Dec 16 20:46:17 pause-022944 kubelet[3315]: E1216 20:46:17.383907    3315 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-022944\" not found" node="pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.234379    3315 apiserver.go:52] "Watching apiserver"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.253234    3315 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.354840    3315 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.369219    3315 kubelet_node_status.go:125] "Node was previously registered" node="pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.369466    3315 kubelet_node_status.go:79] "Successfully registered node" node="pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.369566    3315 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.370650    3315 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.380537    3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/669f5a14-2fec-4984-87d0-49e760d25372-xtables-lock\") pod \"kube-proxy-lr8m7\" (UID: \"669f5a14-2fec-4984-87d0-49e760d25372\") " pod="kube-system/kube-proxy-lr8m7"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.380621    3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/669f5a14-2fec-4984-87d0-49e760d25372-lib-modules\") pod \"kube-proxy-lr8m7\" (UID: \"669f5a14-2fec-4984-87d0-49e760d25372\") " pod="kube-system/kube-proxy-lr8m7"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.384485    3315 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: E1216 20:46:18.415625    3315 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-022944\" already exists" pod="kube-system/kube-scheduler-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.415808    3315 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: E1216 20:46:18.423590    3315 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-022944\" already exists" pod="kube-system/kube-apiserver-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: E1216 20:46:18.435757    3315 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-022944\" already exists" pod="kube-system/etcd-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.435935    3315 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: E1216 20:46:18.452891    3315 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-022944\" already exists" pod="kube-system/kube-apiserver-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: I1216 20:46:18.453015    3315 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-022944"
	Dec 16 20:46:18 pause-022944 kubelet[3315]: E1216 20:46:18.471282    3315 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-022944\" already exists" pod="kube-system/kube-controller-manager-pause-022944"
	Dec 16 20:46:24 pause-022944 kubelet[3315]: E1216 20:46:24.444024    3315 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734381984443446391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125693,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 20:46:24 pause-022944 kubelet[3315]: E1216 20:46:24.444763    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734381984443446391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125693,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 20:46:34 pause-022944 kubelet[3315]: E1216 20:46:34.448018    3315 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734381994447307922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125693,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 20:46:34 pause-022944 kubelet[3315]: E1216 20:46:34.448408    3315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734381994447307922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125693,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-022944 -n pause-022944
helpers_test.go:261: (dbg) Run:  kubectl --context pause-022944 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (50.34s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (296.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-847766 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-847766 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m55.918999175s)

                                                
                                                
-- stdout --
	* [old-k8s-version-847766] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20091
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-847766" primary control-plane node in "old-k8s-version-847766" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 20:48:41.992075   56531 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:48:41.992212   56531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:48:41.992225   56531 out.go:358] Setting ErrFile to fd 2...
	I1216 20:48:41.992232   56531 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:48:41.992552   56531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:48:41.993441   56531 out.go:352] Setting JSON to false
	I1216 20:48:41.994920   56531 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5467,"bootTime":1734376655,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 20:48:41.995065   56531 start.go:139] virtualization: kvm guest
	I1216 20:48:41.997665   56531 out.go:177] * [old-k8s-version-847766] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 20:48:41.999347   56531 notify.go:220] Checking for updates...
	I1216 20:48:41.999397   56531 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 20:48:42.001199   56531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 20:48:42.002803   56531 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:48:42.004373   56531 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:48:42.005894   56531 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 20:48:42.007333   56531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 20:48:42.009712   56531 config.go:182] Loaded profile config "cert-expiration-270954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:48:42.009849   56531 config.go:182] Loaded profile config "kubernetes-upgrade-560677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:48:42.009998   56531 config.go:182] Loaded profile config "stopped-upgrade-976873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1216 20:48:42.010120   56531 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 20:48:42.050061   56531 out.go:177] * Using the kvm2 driver based on user configuration
	I1216 20:48:42.051434   56531 start.go:297] selected driver: kvm2
	I1216 20:48:42.051448   56531 start.go:901] validating driver "kvm2" against <nil>
	I1216 20:48:42.051460   56531 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 20:48:42.052246   56531 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:48:42.052332   56531 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 20:48:42.069188   56531 install.go:137] /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1216 20:48:42.069260   56531 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 20:48:42.069577   56531 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 20:48:42.069609   56531 cni.go:84] Creating CNI manager for ""
	I1216 20:48:42.069667   56531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:48:42.069681   56531 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 20:48:42.069734   56531 start.go:340] cluster config:
	{Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:48:42.069857   56531 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:48:42.072016   56531 out.go:177] * Starting "old-k8s-version-847766" primary control-plane node in "old-k8s-version-847766" cluster
	I1216 20:48:42.073638   56531 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:48:42.073694   56531 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:48:42.073704   56531 cache.go:56] Caching tarball of preloaded images
	I1216 20:48:42.073773   56531 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:48:42.073784   56531 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1216 20:48:42.073905   56531 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/config.json ...
	I1216 20:48:42.073930   56531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/config.json: {Name:mkd0337f60e9db1a361ad5bec3bef3ef58698e23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:48:42.074093   56531 start.go:360] acquireMachinesLock for old-k8s-version-847766: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:49:06.888208   56531 start.go:364] duration metric: took 24.81406333s to acquireMachinesLock for "old-k8s-version-847766"
	I1216 20:49:06.888348   56531 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 20:49:06.888466   56531 start.go:125] createHost starting for "" (driver="kvm2")
	I1216 20:49:06.890518   56531 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 20:49:06.890707   56531 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:49:06.890784   56531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:49:06.910956   56531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43289
	I1216 20:49:06.911505   56531 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:49:06.912162   56531 main.go:141] libmachine: Using API Version  1
	I1216 20:49:06.912189   56531 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:49:06.912581   56531 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:49:06.912795   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:49:06.912957   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:49:06.913132   56531 start.go:159] libmachine.API.Create for "old-k8s-version-847766" (driver="kvm2")
	I1216 20:49:06.913167   56531 client.go:168] LocalClient.Create starting
	I1216 20:49:06.913241   56531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem
	I1216 20:49:06.913301   56531 main.go:141] libmachine: Decoding PEM data...
	I1216 20:49:06.913326   56531 main.go:141] libmachine: Parsing certificate...
	I1216 20:49:06.913419   56531 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem
	I1216 20:49:06.913452   56531 main.go:141] libmachine: Decoding PEM data...
	I1216 20:49:06.913472   56531 main.go:141] libmachine: Parsing certificate...
	I1216 20:49:06.913493   56531 main.go:141] libmachine: Running pre-create checks...
	I1216 20:49:06.913512   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .PreCreateCheck
	I1216 20:49:06.913930   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetConfigRaw
	I1216 20:49:06.914332   56531 main.go:141] libmachine: Creating machine...
	I1216 20:49:06.914346   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .Create
	I1216 20:49:06.914490   56531 main.go:141] libmachine: (old-k8s-version-847766) Creating KVM machine...
	I1216 20:49:06.915657   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found existing default KVM network
	I1216 20:49:06.917169   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:06.917007   56867 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4d:64:1b} reservation:<nil>}
	I1216 20:49:06.917882   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:06.917798   56867 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:99:57:ff} reservation:<nil>}
	I1216 20:49:06.918801   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:06.918751   56867 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:26:ec:2a} reservation:<nil>}
	I1216 20:49:06.919934   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:06.919849   56867 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003090f0}
	I1216 20:49:06.919957   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | created network xml: 
	I1216 20:49:06.919968   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | <network>
	I1216 20:49:06.919976   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG |   <name>mk-old-k8s-version-847766</name>
	I1216 20:49:06.919986   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG |   <dns enable='no'/>
	I1216 20:49:06.919998   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG |   
	I1216 20:49:06.920013   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1216 20:49:06.920034   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG |     <dhcp>
	I1216 20:49:06.920049   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1216 20:49:06.920063   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG |     </dhcp>
	I1216 20:49:06.920073   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG |   </ip>
	I1216 20:49:06.920082   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG |   
	I1216 20:49:06.920093   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | </network>
	I1216 20:49:06.920106   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | 
	I1216 20:49:06.925946   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | trying to create private KVM network mk-old-k8s-version-847766 192.168.72.0/24...
	I1216 20:49:07.000617   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | private KVM network mk-old-k8s-version-847766 192.168.72.0/24 created
	I1216 20:49:07.000649   56531 main.go:141] libmachine: (old-k8s-version-847766) Setting up store path in /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766 ...
	I1216 20:49:07.000677   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:07.000611   56867 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:49:07.000689   56531 main.go:141] libmachine: (old-k8s-version-847766) Building disk image from file:///home/jenkins/minikube-integration/20091-7083/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso
	I1216 20:49:07.000778   56531 main.go:141] libmachine: (old-k8s-version-847766) Downloading /home/jenkins/minikube-integration/20091-7083/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20091-7083/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1216 20:49:07.246293   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:07.246153   56867 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa...
	I1216 20:49:07.352795   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:07.352675   56867 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/old-k8s-version-847766.rawdisk...
	I1216 20:49:07.352827   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | Writing magic tar header
	I1216 20:49:07.352842   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | Writing SSH key tar header
	I1216 20:49:07.352856   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:07.352783   56867 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766 ...
	I1216 20:49:07.352873   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766
	I1216 20:49:07.352947   56531 main.go:141] libmachine: (old-k8s-version-847766) Setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766 (perms=drwx------)
	I1216 20:49:07.352985   56531 main.go:141] libmachine: (old-k8s-version-847766) Setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube/machines (perms=drwxr-xr-x)
	I1216 20:49:07.353001   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube/machines
	I1216 20:49:07.353018   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:49:07.353027   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20091-7083
	I1216 20:49:07.353051   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1216 20:49:07.353064   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | Checking permissions on dir: /home/jenkins
	I1216 20:49:07.353075   56531 main.go:141] libmachine: (old-k8s-version-847766) Setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube (perms=drwxr-xr-x)
	I1216 20:49:07.353088   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | Checking permissions on dir: /home
	I1216 20:49:07.353103   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | Skipping /home - not owner
	I1216 20:49:07.353120   56531 main.go:141] libmachine: (old-k8s-version-847766) Setting executable bit set on /home/jenkins/minikube-integration/20091-7083 (perms=drwxrwxr-x)
	I1216 20:49:07.353134   56531 main.go:141] libmachine: (old-k8s-version-847766) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1216 20:49:07.353147   56531 main.go:141] libmachine: (old-k8s-version-847766) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1216 20:49:07.353160   56531 main.go:141] libmachine: (old-k8s-version-847766) Creating domain...
	I1216 20:49:07.354181   56531 main.go:141] libmachine: (old-k8s-version-847766) define libvirt domain using xml: 
	I1216 20:49:07.354204   56531 main.go:141] libmachine: (old-k8s-version-847766) <domain type='kvm'>
	I1216 20:49:07.354220   56531 main.go:141] libmachine: (old-k8s-version-847766)   <name>old-k8s-version-847766</name>
	I1216 20:49:07.354242   56531 main.go:141] libmachine: (old-k8s-version-847766)   <memory unit='MiB'>2200</memory>
	I1216 20:49:07.354252   56531 main.go:141] libmachine: (old-k8s-version-847766)   <vcpu>2</vcpu>
	I1216 20:49:07.354263   56531 main.go:141] libmachine: (old-k8s-version-847766)   <features>
	I1216 20:49:07.354274   56531 main.go:141] libmachine: (old-k8s-version-847766)     <acpi/>
	I1216 20:49:07.354286   56531 main.go:141] libmachine: (old-k8s-version-847766)     <apic/>
	I1216 20:49:07.354291   56531 main.go:141] libmachine: (old-k8s-version-847766)     <pae/>
	I1216 20:49:07.354295   56531 main.go:141] libmachine: (old-k8s-version-847766)     
	I1216 20:49:07.354300   56531 main.go:141] libmachine: (old-k8s-version-847766)   </features>
	I1216 20:49:07.354307   56531 main.go:141] libmachine: (old-k8s-version-847766)   <cpu mode='host-passthrough'>
	I1216 20:49:07.354312   56531 main.go:141] libmachine: (old-k8s-version-847766)   
	I1216 20:49:07.354317   56531 main.go:141] libmachine: (old-k8s-version-847766)   </cpu>
	I1216 20:49:07.354327   56531 main.go:141] libmachine: (old-k8s-version-847766)   <os>
	I1216 20:49:07.354337   56531 main.go:141] libmachine: (old-k8s-version-847766)     <type>hvm</type>
	I1216 20:49:07.354349   56531 main.go:141] libmachine: (old-k8s-version-847766)     <boot dev='cdrom'/>
	I1216 20:49:07.354357   56531 main.go:141] libmachine: (old-k8s-version-847766)     <boot dev='hd'/>
	I1216 20:49:07.354368   56531 main.go:141] libmachine: (old-k8s-version-847766)     <bootmenu enable='no'/>
	I1216 20:49:07.354382   56531 main.go:141] libmachine: (old-k8s-version-847766)   </os>
	I1216 20:49:07.354390   56531 main.go:141] libmachine: (old-k8s-version-847766)   <devices>
	I1216 20:49:07.354395   56531 main.go:141] libmachine: (old-k8s-version-847766)     <disk type='file' device='cdrom'>
	I1216 20:49:07.354405   56531 main.go:141] libmachine: (old-k8s-version-847766)       <source file='/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/boot2docker.iso'/>
	I1216 20:49:07.354410   56531 main.go:141] libmachine: (old-k8s-version-847766)       <target dev='hdc' bus='scsi'/>
	I1216 20:49:07.354420   56531 main.go:141] libmachine: (old-k8s-version-847766)       <readonly/>
	I1216 20:49:07.354426   56531 main.go:141] libmachine: (old-k8s-version-847766)     </disk>
	I1216 20:49:07.354440   56531 main.go:141] libmachine: (old-k8s-version-847766)     <disk type='file' device='disk'>
	I1216 20:49:07.354456   56531 main.go:141] libmachine: (old-k8s-version-847766)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1216 20:49:07.354472   56531 main.go:141] libmachine: (old-k8s-version-847766)       <source file='/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/old-k8s-version-847766.rawdisk'/>
	I1216 20:49:07.354479   56531 main.go:141] libmachine: (old-k8s-version-847766)       <target dev='hda' bus='virtio'/>
	I1216 20:49:07.354486   56531 main.go:141] libmachine: (old-k8s-version-847766)     </disk>
	I1216 20:49:07.354494   56531 main.go:141] libmachine: (old-k8s-version-847766)     <interface type='network'>
	I1216 20:49:07.354500   56531 main.go:141] libmachine: (old-k8s-version-847766)       <source network='mk-old-k8s-version-847766'/>
	I1216 20:49:07.354509   56531 main.go:141] libmachine: (old-k8s-version-847766)       <model type='virtio'/>
	I1216 20:49:07.354522   56531 main.go:141] libmachine: (old-k8s-version-847766)     </interface>
	I1216 20:49:07.354537   56531 main.go:141] libmachine: (old-k8s-version-847766)     <interface type='network'>
	I1216 20:49:07.354559   56531 main.go:141] libmachine: (old-k8s-version-847766)       <source network='default'/>
	I1216 20:49:07.354569   56531 main.go:141] libmachine: (old-k8s-version-847766)       <model type='virtio'/>
	I1216 20:49:07.354575   56531 main.go:141] libmachine: (old-k8s-version-847766)     </interface>
	I1216 20:49:07.354580   56531 main.go:141] libmachine: (old-k8s-version-847766)     <serial type='pty'>
	I1216 20:49:07.354586   56531 main.go:141] libmachine: (old-k8s-version-847766)       <target port='0'/>
	I1216 20:49:07.354592   56531 main.go:141] libmachine: (old-k8s-version-847766)     </serial>
	I1216 20:49:07.354600   56531 main.go:141] libmachine: (old-k8s-version-847766)     <console type='pty'>
	I1216 20:49:07.354616   56531 main.go:141] libmachine: (old-k8s-version-847766)       <target type='serial' port='0'/>
	I1216 20:49:07.354628   56531 main.go:141] libmachine: (old-k8s-version-847766)     </console>
	I1216 20:49:07.354639   56531 main.go:141] libmachine: (old-k8s-version-847766)     <rng model='virtio'>
	I1216 20:49:07.354648   56531 main.go:141] libmachine: (old-k8s-version-847766)       <backend model='random'>/dev/random</backend>
	I1216 20:49:07.354657   56531 main.go:141] libmachine: (old-k8s-version-847766)     </rng>
	I1216 20:49:07.354665   56531 main.go:141] libmachine: (old-k8s-version-847766)     
	I1216 20:49:07.354674   56531 main.go:141] libmachine: (old-k8s-version-847766)     
	I1216 20:49:07.354679   56531 main.go:141] libmachine: (old-k8s-version-847766)   </devices>
	I1216 20:49:07.354693   56531 main.go:141] libmachine: (old-k8s-version-847766) </domain>
	I1216 20:49:07.354707   56531 main.go:141] libmachine: (old-k8s-version-847766) 
	I1216 20:49:07.359394   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:e7:ba:32 in network default
	I1216 20:49:07.359979   56531 main.go:141] libmachine: (old-k8s-version-847766) Ensuring networks are active...
	I1216 20:49:07.360000   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:07.360632   56531 main.go:141] libmachine: (old-k8s-version-847766) Ensuring network default is active
	I1216 20:49:07.361002   56531 main.go:141] libmachine: (old-k8s-version-847766) Ensuring network mk-old-k8s-version-847766 is active
	I1216 20:49:07.361644   56531 main.go:141] libmachine: (old-k8s-version-847766) Getting domain xml...
	I1216 20:49:07.362511   56531 main.go:141] libmachine: (old-k8s-version-847766) Creating domain...
	I1216 20:49:08.621121   56531 main.go:141] libmachine: (old-k8s-version-847766) Waiting to get IP...
	I1216 20:49:08.621892   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:08.622335   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:49:08.622388   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:08.622336   56867 retry.go:31] will retry after 226.226751ms: waiting for machine to come up
	I1216 20:49:08.849768   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:08.850240   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:49:08.850265   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:08.850210   56867 retry.go:31] will retry after 324.730569ms: waiting for machine to come up
	I1216 20:49:09.176862   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:09.177452   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:49:09.177476   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:09.177409   56867 retry.go:31] will retry after 321.708299ms: waiting for machine to come up
	I1216 20:49:09.500921   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:09.501442   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:49:09.501469   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:09.501382   56867 retry.go:31] will retry after 480.830696ms: waiting for machine to come up
	I1216 20:49:09.984093   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:09.984560   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:49:09.984583   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:09.984527   56867 retry.go:31] will retry after 522.805446ms: waiting for machine to come up
	I1216 20:49:10.509272   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:10.509750   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:49:10.509804   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:10.509705   56867 retry.go:31] will retry after 593.443336ms: waiting for machine to come up
	I1216 20:49:11.104390   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:11.105170   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:49:11.105202   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:11.105120   56867 retry.go:31] will retry after 852.201692ms: waiting for machine to come up
	I1216 20:49:11.959515   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:11.960393   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:49:11.960449   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:11.960346   56867 retry.go:31] will retry after 1.064028002s: waiting for machine to come up
	I1216 20:49:13.025820   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:13.026427   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:49:13.026482   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:13.026380   56867 retry.go:31] will retry after 1.630445443s: waiting for machine to come up
	I1216 20:49:14.659296   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:14.659750   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:49:14.659781   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:14.659702   56867 retry.go:31] will retry after 1.991362325s: waiting for machine to come up
	I1216 20:49:16.652359   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:16.652961   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:49:16.652991   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:16.652887   56867 retry.go:31] will retry after 1.971734539s: waiting for machine to come up
	I1216 20:49:18.625925   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:18.626450   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:49:18.626471   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:18.626407   56867 retry.go:31] will retry after 3.627882909s: waiting for machine to come up
	I1216 20:49:22.256120   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:22.256548   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:49:22.256573   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:22.256523   56867 retry.go:31] will retry after 2.863726326s: waiting for machine to come up
	I1216 20:49:25.121707   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:25.122339   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:49:25.122372   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:49:25.122277   56867 retry.go:31] will retry after 4.944629647s: waiting for machine to come up
	I1216 20:49:30.070529   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:30.071253   56531 main.go:141] libmachine: (old-k8s-version-847766) Found IP for machine: 192.168.72.240
	I1216 20:49:30.071311   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has current primary IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:30.071320   56531 main.go:141] libmachine: (old-k8s-version-847766) Reserving static IP address...
	I1216 20:49:30.071809   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-847766", mac: "52:54:00:c4:f2:8a", ip: "192.168.72.240"} in network mk-old-k8s-version-847766
	I1216 20:49:30.157116   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | Getting to WaitForSSH function...
	I1216 20:49:30.157156   56531 main.go:141] libmachine: (old-k8s-version-847766) Reserved static IP address: 192.168.72.240
	I1216 20:49:30.157190   56531 main.go:141] libmachine: (old-k8s-version-847766) Waiting for SSH to be available...
	I1216 20:49:30.160374   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:30.160880   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:49:30.160919   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:30.161202   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | Using SSH client type: external
	I1216 20:49:30.161234   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa (-rw-------)
	I1216 20:49:30.161266   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:49:30.161281   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | About to run SSH command:
	I1216 20:49:30.161296   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | exit 0
	I1216 20:49:30.292031   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | SSH cmd err, output: <nil>: 
	I1216 20:49:30.292329   56531 main.go:141] libmachine: (old-k8s-version-847766) KVM machine creation complete!
	I1216 20:49:30.292677   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetConfigRaw
	I1216 20:49:30.293223   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:49:30.293435   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:49:30.293608   56531 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1216 20:49:30.293625   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetState
	I1216 20:49:30.294879   56531 main.go:141] libmachine: Detecting operating system of created instance...
	I1216 20:49:30.294897   56531 main.go:141] libmachine: Waiting for SSH to be available...
	I1216 20:49:30.294904   56531 main.go:141] libmachine: Getting to WaitForSSH function...
	I1216 20:49:30.294912   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:49:30.297649   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:30.298048   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:49:30.298082   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:30.298190   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:49:30.298400   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:49:30.298542   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:49:30.298681   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:49:30.298856   56531 main.go:141] libmachine: Using SSH client type: native
	I1216 20:49:30.299044   56531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:49:30.299055   56531 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1216 20:49:30.414806   56531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:49:30.414829   56531 main.go:141] libmachine: Detecting the provisioner...
	I1216 20:49:30.414837   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:49:30.417985   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:30.418343   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:49:30.418380   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:30.418554   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:49:30.418787   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:49:30.419001   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:49:30.419220   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:49:30.419439   56531 main.go:141] libmachine: Using SSH client type: native
	I1216 20:49:30.419662   56531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:49:30.419677   56531 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1216 20:49:30.532486   56531 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1216 20:49:30.532602   56531 main.go:141] libmachine: found compatible host: buildroot
	I1216 20:49:30.532618   56531 main.go:141] libmachine: Provisioning with buildroot...
	I1216 20:49:30.532635   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:49:30.532944   56531 buildroot.go:166] provisioning hostname "old-k8s-version-847766"
	I1216 20:49:30.532965   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:49:30.533184   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:49:30.535663   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:30.535995   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:49:30.536016   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:30.536175   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:49:30.536385   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:49:30.536544   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:49:30.536669   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:49:30.536823   56531 main.go:141] libmachine: Using SSH client type: native
	I1216 20:49:30.537007   56531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:49:30.537029   56531 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-847766 && echo "old-k8s-version-847766" | sudo tee /etc/hostname
	I1216 20:49:30.664728   56531 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-847766
	
	I1216 20:49:30.664766   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:49:30.667814   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:30.668216   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:49:30.668246   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:30.668441   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:49:30.668643   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:49:30.668837   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:49:30.668994   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:49:30.669184   56531 main.go:141] libmachine: Using SSH client type: native
	I1216 20:49:30.669376   56531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:49:30.669393   56531 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-847766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-847766/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-847766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:49:30.794470   56531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:49:30.794510   56531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:49:30.794573   56531 buildroot.go:174] setting up certificates
	I1216 20:49:30.794585   56531 provision.go:84] configureAuth start
	I1216 20:49:30.794601   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:49:30.794916   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:49:30.797955   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:30.798427   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:49:30.798464   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:30.798673   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:49:30.801149   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:30.801535   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:49:30.801562   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:30.801743   56531 provision.go:143] copyHostCerts
	I1216 20:49:30.801806   56531 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:49:30.801819   56531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:49:30.801883   56531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:49:30.802032   56531 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:49:30.802042   56531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:49:30.802069   56531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:49:30.802142   56531 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:49:30.802152   56531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:49:30.802175   56531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:49:30.802236   56531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-847766 san=[127.0.0.1 192.168.72.240 localhost minikube old-k8s-version-847766]
	I1216 20:49:31.228980   56531 provision.go:177] copyRemoteCerts
	I1216 20:49:31.229054   56531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:49:31.229076   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:49:31.232415   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.232827   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:49:31.232862   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.233200   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:49:31.233473   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:49:31.233680   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:49:31.233864   56531 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:49:31.322485   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:49:31.354171   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1216 20:49:31.387444   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 20:49:31.416157   56531 provision.go:87] duration metric: took 621.536959ms to configureAuth
	I1216 20:49:31.416197   56531 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:49:31.416410   56531 config.go:182] Loaded profile config "old-k8s-version-847766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:49:31.416519   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:49:31.419643   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.420124   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:49:31.420159   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.420359   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:49:31.420579   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:49:31.420756   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:49:31.420897   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:49:31.421112   56531 main.go:141] libmachine: Using SSH client type: native
	I1216 20:49:31.421312   56531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:49:31.421329   56531 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:49:31.667226   56531 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:49:31.667281   56531 main.go:141] libmachine: Checking connection to Docker...
	I1216 20:49:31.667292   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetURL
	I1216 20:49:31.668781   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | Using libvirt version 6000000
	I1216 20:49:31.671072   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.671458   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:49:31.671501   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.671689   56531 main.go:141] libmachine: Docker is up and running!
	I1216 20:49:31.671716   56531 main.go:141] libmachine: Reticulating splines...
	I1216 20:49:31.671724   56531 client.go:171] duration metric: took 24.758545738s to LocalClient.Create
	I1216 20:49:31.671751   56531 start.go:167] duration metric: took 24.758627557s to libmachine.API.Create "old-k8s-version-847766"
	I1216 20:49:31.671765   56531 start.go:293] postStartSetup for "old-k8s-version-847766" (driver="kvm2")
	I1216 20:49:31.671779   56531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:49:31.671803   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:49:31.672095   56531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:49:31.672119   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:49:31.674300   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.674618   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:49:31.674653   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.674762   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:49:31.674932   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:49:31.675094   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:49:31.675262   56531 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:49:31.762996   56531 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:49:31.767798   56531 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:49:31.767829   56531 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:49:31.767903   56531 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:49:31.768025   56531 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:49:31.768158   56531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:49:31.779402   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:49:31.806127   56531 start.go:296] duration metric: took 134.344822ms for postStartSetup
	I1216 20:49:31.806190   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetConfigRaw
	I1216 20:49:31.806828   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:49:31.809823   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.810273   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:49:31.810298   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.810604   56531 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/config.json ...
	I1216 20:49:31.810818   56531 start.go:128] duration metric: took 24.922339536s to createHost
	I1216 20:49:31.810844   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:49:31.813281   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.813715   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:49:31.813748   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.814016   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:49:31.814258   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:49:31.814477   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:49:31.814658   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:49:31.814907   56531 main.go:141] libmachine: Using SSH client type: native
	I1216 20:49:31.815136   56531 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:49:31.815158   56531 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:49:31.940756   56531 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382171.895455278
	
	I1216 20:49:31.940789   56531 fix.go:216] guest clock: 1734382171.895455278
	I1216 20:49:31.940802   56531 fix.go:229] Guest: 2024-12-16 20:49:31.895455278 +0000 UTC Remote: 2024-12-16 20:49:31.810832048 +0000 UTC m=+49.859631765 (delta=84.62323ms)
	I1216 20:49:31.940865   56531 fix.go:200] guest clock delta is within tolerance: 84.62323ms
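The delta reported above is simply the guest's "date +%s.%N" reading minus the host-side wall-clock timestamp; both fall within the same second, so the fractional parts alone reproduce it (a quick check, values copied from the lines above):

	awk 'BEGIN { printf "%.5f ms\n", (0.895455278 - 0.810832048) * 1000 }'
	# prints 84.62323 ms, matching the delta the log reports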
	I1216 20:49:31.940877   56531 start.go:83] releasing machines lock for "old-k8s-version-847766", held for 25.052605458s
	I1216 20:49:31.940912   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:49:31.941194   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:49:31.944387   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.944805   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:49:31.944844   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.945085   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:49:31.945648   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:49:31.945876   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:49:31.945964   56531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:49:31.946048   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:49:31.946153   56531 ssh_runner.go:195] Run: cat /version.json
	I1216 20:49:31.946209   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:49:31.949465   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.949949   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:49:31.949988   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.950324   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:49:31.950574   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:49:31.950733   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.950774   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:49:31.950806   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:31.950856   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:49:31.951016   56531 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:49:31.951355   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:49:31.951561   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:49:31.951753   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:49:31.951981   56531 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:49:32.064629   56531 ssh_runner.go:195] Run: systemctl --version
	I1216 20:49:32.071675   56531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:49:32.240567   56531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:49:32.247987   56531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:49:32.248053   56531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:49:32.267477   56531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:49:32.267505   56531 start.go:495] detecting cgroup driver to use...
	I1216 20:49:32.267577   56531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:49:32.292307   56531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:49:32.308242   56531 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:49:32.308320   56531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:49:32.324207   56531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:49:32.340574   56531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:49:32.485270   56531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:49:32.671882   56531 docker.go:233] disabling docker service ...
	I1216 20:49:32.671953   56531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:49:32.689705   56531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:49:32.705597   56531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:49:32.867340   56531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:49:33.054306   56531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:49:33.072305   56531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:49:33.096159   56531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1216 20:49:33.096218   56531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:49:33.112165   56531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:49:33.112230   56531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:49:33.124419   56531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:49:33.139874   56531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
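Taken together, the three sed edits above leave the relevant keys in /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (excerpt reconstructed from the commands; the section headers are the usual CRI-O ones and are not shown in the log):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"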
	I1216 20:49:33.156255   56531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:49:33.171439   56531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:49:33.183572   56531 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:49:33.183646   56531 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:49:33.206172   56531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
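The modprobe and the echo above load br_netfilter (the sysctl key only exists once that module is loaded, which is why the earlier probe failed) and turn on IPv4 forwarding; a manual spot-check of the same state might look like:

	# verify the bridge-netfilter module and the kernel settings the CNI / kube-proxy path relies on
	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # typically reports 1 once the module is loaded
	cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo above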
	I1216 20:49:33.222327   56531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:49:33.366560   56531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:49:33.474097   56531 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:49:33.474177   56531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:49:33.479649   56531 start.go:563] Will wait 60s for crictl version
	I1216 20:49:33.479725   56531 ssh_runner.go:195] Run: which crictl
	I1216 20:49:33.484026   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:49:33.545486   56531 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:49:33.545579   56531 ssh_runner.go:195] Run: crio --version
	I1216 20:49:33.580082   56531 ssh_runner.go:195] Run: crio --version
	I1216 20:49:33.620437   56531 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1216 20:49:33.621929   56531 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:49:33.625124   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:33.625529   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:49:33.625562   56531 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:49:33.625792   56531 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1216 20:49:33.631857   56531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:49:33.645867   56531 kubeadm.go:883] updating cluster {Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:49:33.646014   56531 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:49:33.646084   56531 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:49:33.688285   56531 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 20:49:33.688374   56531 ssh_runner.go:195] Run: which lz4
	I1216 20:49:33.693991   56531 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 20:49:33.698661   56531 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 20:49:33.698698   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1216 20:49:35.539198   56531 crio.go:462] duration metric: took 1.845242896s to copy over tarball
	I1216 20:49:35.539322   56531 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 20:49:38.373242   56531 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.833883979s)
	I1216 20:49:38.373283   56531 crio.go:469] duration metric: took 2.834048362s to extract the tarball
	I1216 20:49:38.373294   56531 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 20:49:38.421281   56531 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:49:38.592770   56531 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 20:49:38.592804   56531 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 20:49:38.592856   56531 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:49:38.592907   56531 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:49:38.592935   56531 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1216 20:49:38.592972   56531 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1216 20:49:38.593155   56531 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1216 20:49:38.593165   56531 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:49:38.593170   56531 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:49:38.593244   56531 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:49:38.594547   56531 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:49:38.594582   56531 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:49:38.594607   56531 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:49:38.594613   56531 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1216 20:49:38.594633   56531 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:49:38.594644   56531 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1216 20:49:38.594698   56531 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:49:38.594710   56531 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1216 20:49:38.784643   56531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:49:38.817492   56531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1216 20:49:38.820204   56531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:49:38.823847   56531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1216 20:49:38.836163   56531 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1216 20:49:38.836212   56531 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:49:38.836249   56531 ssh_runner.go:195] Run: which crictl
	I1216 20:49:38.848477   56531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1216 20:49:38.901816   56531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:49:38.913492   56531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:49:38.943280   56531 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1216 20:49:38.943337   56531 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1216 20:49:38.943397   56531 ssh_runner.go:195] Run: which crictl
	I1216 20:49:38.943418   56531 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1216 20:49:38.943458   56531 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:49:38.943523   56531 ssh_runner.go:195] Run: which crictl
	I1216 20:49:38.951348   56531 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1216 20:49:38.951385   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:49:38.951397   56531 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1216 20:49:38.951435   56531 ssh_runner.go:195] Run: which crictl
	I1216 20:49:38.995047   56531 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1216 20:49:38.995114   56531 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1216 20:49:38.995167   56531 ssh_runner.go:195] Run: which crictl
	I1216 20:49:39.011688   56531 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:49:39.036321   56531 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1216 20:49:39.036362   56531 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:49:39.036407   56531 ssh_runner.go:195] Run: which crictl
	I1216 20:49:39.036412   56531 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1216 20:49:39.036452   56531 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:49:39.036497   56531 ssh_runner.go:195] Run: which crictl
	I1216 20:49:39.036525   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 20:49:39.036557   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:49:39.036588   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 20:49:39.061033   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:49:39.061081   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 20:49:39.268715   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:49:39.268775   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:49:39.268822   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 20:49:39.268888   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 20:49:39.268909   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:49:39.268976   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 20:49:39.269003   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 20:49:39.428104   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 20:49:39.428152   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 20:49:39.428200   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:49:39.428228   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:49:39.428256   56531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1216 20:49:39.428331   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 20:49:39.428483   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 20:49:39.542834   56531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1216 20:49:39.553237   56531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1216 20:49:39.554709   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 20:49:39.554729   56531 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 20:49:39.560655   56531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1216 20:49:39.560697   56531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1216 20:49:39.610889   56531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1216 20:49:39.612694   56531 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1216 20:49:39.612747   56531 cache_images.go:92] duration metric: took 1.019930851s to LoadCachedImages
	W1216 20:49:39.612825   56531 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1216 20:49:39.612841   56531 kubeadm.go:934] updating node { 192.168.72.240 8443 v1.20.0 crio true true} ...
	I1216 20:49:39.612983   56531 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-847766 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
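Reassembled as a file, the kubelet snippet above becomes the systemd drop-in that is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (reconstruction from the log, not verbatim from disk):

	[Unit]
	Wants=crio.service

	[Service]
	# the empty ExecStart= clears the ExecStart inherited from the base kubelet.service unit
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-847766 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240

	[Install]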
	I1216 20:49:39.613062   56531 ssh_runner.go:195] Run: crio config
	I1216 20:49:39.665201   56531 cni.go:84] Creating CNI manager for ""
	I1216 20:49:39.665232   56531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:49:39.665244   56531 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:49:39.665265   56531 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.240 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-847766 NodeName:old-k8s-version-847766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1216 20:49:39.665408   56531 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-847766"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 20:49:39.665465   56531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1216 20:49:39.675824   56531 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:49:39.675887   56531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:49:39.686107   56531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1216 20:49:39.703746   56531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:49:39.720998   56531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
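The kubeadm config printed above has just been written to /var/tmp/minikube/kubeadm.yaml.new; one way to sanity-check such a file without changing the node is a kubeadm dry run (sketch only, the exact invocation minikube uses is not shown in this log):

	# dry-run validates the config and renders the manifests without starting anything
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run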
	I1216 20:49:39.741871   56531 ssh_runner.go:195] Run: grep 192.168.72.240	control-plane.minikube.internal$ /etc/hosts
	I1216 20:49:39.747296   56531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:49:39.762495   56531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:49:39.896834   56531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:49:39.915118   56531 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766 for IP: 192.168.72.240
	I1216 20:49:39.915142   56531 certs.go:194] generating shared ca certs ...
	I1216 20:49:39.915163   56531 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:49:39.915358   56531 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:49:39.915399   56531 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:49:39.915406   56531 certs.go:256] generating profile certs ...
	I1216 20:49:39.915473   56531 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.key
	I1216 20:49:39.915502   56531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.crt with IP's: []
	I1216 20:49:39.987915   56531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.crt ...
	I1216 20:49:39.987951   56531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.crt: {Name:mk1b3cb29709881f505e20ebed154122396e997a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:49:39.988147   56531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.key ...
	I1216 20:49:39.988167   56531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.key: {Name:mke44337bded7eaafc49a0d47a7b3425df020e7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:49:39.988279   56531 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key.6c8704df
	I1216 20:49:39.988300   56531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt.6c8704df with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.240]
	I1216 20:49:40.295443   56531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt.6c8704df ...
	I1216 20:49:40.295484   56531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt.6c8704df: {Name:mk232487c862ec228ef6989676c146335af7baf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:49:40.295680   56531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key.6c8704df ...
	I1216 20:49:40.295698   56531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key.6c8704df: {Name:mkf65e89456c49009688dd53057c1791551574d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:49:40.295799   56531 certs.go:381] copying /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt.6c8704df -> /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt
	I1216 20:49:40.295893   56531 certs.go:385] copying /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key.6c8704df -> /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key
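For reference, an approximately equivalent way to mint a serving certificate with the SANs listed above using plain openssl (file names and subject are illustrative, not minikube's):

	# key + CSR, then sign with the cluster CA and attach the IP SANs from the log
	openssl genrsa -out apiserver.key 2048
	openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
	  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.72.240') \
	  -out apiserver.crt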
	I1216 20:49:40.295970   56531 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key
	I1216 20:49:40.295995   56531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.crt with IP's: []
	I1216 20:49:40.404276   56531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.crt ...
	I1216 20:49:40.404308   56531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.crt: {Name:mk77caa74d193d4defbb3235785fc2b444b7a7b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:49:40.442338   56531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key ...
	I1216 20:49:40.442381   56531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key: {Name:mk97e44419ef6855e928d6392f9fd28fb9baf09c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:49:40.442606   56531 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:49:40.442668   56531 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:49:40.442685   56531 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:49:40.442726   56531 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:49:40.442760   56531 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:49:40.442798   56531 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:49:40.442854   56531 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:49:40.443492   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:49:40.472119   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:49:40.497991   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:49:40.523528   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:49:40.548863   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 20:49:40.582718   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 20:49:40.608389   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:49:40.634836   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 20:49:40.661381   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:49:40.687619   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:49:40.713124   56531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:49:40.739606   56531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 20:49:40.760923   56531 ssh_runner.go:195] Run: openssl version
	I1216 20:49:40.776674   56531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:49:40.795440   56531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:49:40.804634   56531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:49:40.804711   56531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:49:40.816878   56531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 20:49:40.830987   56531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:49:40.843838   56531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:49:40.848859   56531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:49:40.848933   56531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:49:40.857201   56531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:49:40.869306   56531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:49:40.881995   56531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:49:40.888201   56531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:49:40.888286   56531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:49:40.895874   56531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 20:49:40.912742   56531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:49:40.917878   56531 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 20:49:40.917944   56531 kubeadm.go:392] StartCluster: {Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:49:40.918036   56531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:49:40.918093   56531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:49:40.963021   56531 cri.go:89] found id: ""
	I1216 20:49:40.963104   56531 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 20:49:40.974677   56531 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 20:49:40.985702   56531 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:49:40.996723   56531 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:49:40.996744   56531 kubeadm.go:157] found existing configuration files:
	
	I1216 20:49:40.996789   56531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 20:49:41.007593   56531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:49:41.007664   56531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:49:41.018265   56531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 20:49:41.028526   56531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:49:41.028591   56531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:49:41.038611   56531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 20:49:41.050726   56531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:49:41.050806   56531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:49:41.060798   56531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 20:49:41.070487   56531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:49:41.070557   56531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 20:49:41.081942   56531 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 20:49:41.378999   56531 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 20:51:38.834144   56531 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 20:51:38.834302   56531 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 20:51:38.835839   56531 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 20:51:38.835891   56531 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 20:51:38.835979   56531 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 20:51:38.836168   56531 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 20:51:38.836301   56531 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 20:51:38.836422   56531 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 20:51:38.837911   56531 out.go:235]   - Generating certificates and keys ...
	I1216 20:51:38.838021   56531 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 20:51:38.838105   56531 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 20:51:38.838208   56531 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 20:51:38.838284   56531 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1216 20:51:38.838370   56531 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1216 20:51:38.838436   56531 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1216 20:51:38.838508   56531 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1216 20:51:38.838675   56531 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-847766] and IPs [192.168.72.240 127.0.0.1 ::1]
	I1216 20:51:38.838746   56531 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1216 20:51:38.838916   56531 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-847766] and IPs [192.168.72.240 127.0.0.1 ::1]
	I1216 20:51:38.839004   56531 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 20:51:38.839089   56531 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 20:51:38.839157   56531 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1216 20:51:38.839232   56531 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 20:51:38.839319   56531 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 20:51:38.839401   56531 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 20:51:38.839488   56531 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 20:51:38.839560   56531 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 20:51:38.839674   56531 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 20:51:38.839796   56531 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 20:51:38.839865   56531 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 20:51:38.839954   56531 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 20:51:38.841525   56531 out.go:235]   - Booting up control plane ...
	I1216 20:51:38.841634   56531 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 20:51:38.841741   56531 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 20:51:38.841829   56531 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 20:51:38.841930   56531 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 20:51:38.842124   56531 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 20:51:38.842182   56531 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 20:51:38.842237   56531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:51:38.842415   56531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:51:38.842525   56531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:51:38.842709   56531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:51:38.842784   56531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:51:38.842938   56531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:51:38.843012   56531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:51:38.843179   56531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:51:38.843304   56531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:51:38.843513   56531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:51:38.843526   56531 kubeadm.go:310] 
	I1216 20:51:38.843558   56531 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 20:51:38.843594   56531 kubeadm.go:310] 		timed out waiting for the condition
	I1216 20:51:38.843614   56531 kubeadm.go:310] 
	I1216 20:51:38.843655   56531 kubeadm.go:310] 	This error is likely caused by:
	I1216 20:51:38.843691   56531 kubeadm.go:310] 		- The kubelet is not running
	I1216 20:51:38.843778   56531 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 20:51:38.843786   56531 kubeadm.go:310] 
	I1216 20:51:38.843891   56531 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 20:51:38.843957   56531 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 20:51:38.844011   56531 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 20:51:38.844021   56531 kubeadm.go:310] 
	I1216 20:51:38.844168   56531 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 20:51:38.844270   56531 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 20:51:38.844278   56531 kubeadm.go:310] 
	I1216 20:51:38.844360   56531 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 20:51:38.844479   56531 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 20:51:38.844615   56531 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 20:51:38.844715   56531 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 20:51:38.844743   56531 kubeadm.go:310] 
	W1216 20:51:38.844858   56531 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-847766] and IPs [192.168.72.240 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-847766] and IPs [192.168.72.240 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-847766] and IPs [192.168.72.240 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-847766] and IPs [192.168.72.240 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 20:51:38.844892   56531 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 20:51:40.574085   56531 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.729157865s)
	I1216 20:51:40.574202   56531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 20:51:40.590747   56531 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:51:40.601309   56531 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:51:40.601333   56531 kubeadm.go:157] found existing configuration files:
	
	I1216 20:51:40.601375   56531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 20:51:40.612319   56531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:51:40.612385   56531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:51:40.622605   56531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 20:51:40.632190   56531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:51:40.632260   56531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:51:40.643354   56531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 20:51:40.653552   56531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:51:40.653633   56531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:51:40.666051   56531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 20:51:40.675867   56531 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:51:40.675944   56531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 20:51:40.686413   56531 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 20:51:40.914165   56531 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 20:53:37.217618   56531 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 20:53:37.217710   56531 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 20:53:37.219349   56531 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 20:53:37.219411   56531 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 20:53:37.219504   56531 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 20:53:37.219624   56531 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 20:53:37.219735   56531 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 20:53:37.219797   56531 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 20:53:37.221817   56531 out.go:235]   - Generating certificates and keys ...
	I1216 20:53:37.221887   56531 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 20:53:37.221944   56531 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 20:53:37.222016   56531 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 20:53:37.222070   56531 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 20:53:37.222134   56531 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 20:53:37.222180   56531 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 20:53:37.222237   56531 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 20:53:37.222304   56531 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 20:53:37.222372   56531 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 20:53:37.222439   56531 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 20:53:37.222473   56531 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 20:53:37.222525   56531 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 20:53:37.222568   56531 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 20:53:37.222673   56531 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 20:53:37.222769   56531 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 20:53:37.222830   56531 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 20:53:37.222933   56531 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 20:53:37.223008   56531 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 20:53:37.223053   56531 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 20:53:37.223119   56531 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 20:53:37.224931   56531 out.go:235]   - Booting up control plane ...
	I1216 20:53:37.225040   56531 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 20:53:37.225129   56531 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 20:53:37.225196   56531 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 20:53:37.225275   56531 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 20:53:37.225436   56531 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 20:53:37.225487   56531 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 20:53:37.225548   56531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:53:37.225718   56531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:53:37.225783   56531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:53:37.225962   56531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:53:37.226040   56531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:53:37.226262   56531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:53:37.226364   56531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:53:37.226526   56531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:53:37.226586   56531 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 20:53:37.226747   56531 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 20:53:37.226755   56531 kubeadm.go:310] 
	I1216 20:53:37.226788   56531 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 20:53:37.226836   56531 kubeadm.go:310] 		timed out waiting for the condition
	I1216 20:53:37.226843   56531 kubeadm.go:310] 
	I1216 20:53:37.226876   56531 kubeadm.go:310] 	This error is likely caused by:
	I1216 20:53:37.226910   56531 kubeadm.go:310] 		- The kubelet is not running
	I1216 20:53:37.227002   56531 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 20:53:37.227008   56531 kubeadm.go:310] 
	I1216 20:53:37.227110   56531 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 20:53:37.227144   56531 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 20:53:37.227179   56531 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 20:53:37.227185   56531 kubeadm.go:310] 
	I1216 20:53:37.227335   56531 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 20:53:37.227448   56531 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 20:53:37.227456   56531 kubeadm.go:310] 
	I1216 20:53:37.227544   56531 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 20:53:37.227651   56531 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 20:53:37.227726   56531 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 20:53:37.227792   56531 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 20:53:37.227854   56531 kubeadm.go:394] duration metric: took 3m56.309916205s to StartCluster
	I1216 20:53:37.227887   56531 kubeadm.go:310] 
	I1216 20:53:37.227911   56531 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 20:53:37.227966   56531 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 20:53:37.273056   56531 cri.go:89] found id: ""
	I1216 20:53:37.273091   56531 logs.go:282] 0 containers: []
	W1216 20:53:37.273100   56531 logs.go:284] No container was found matching "kube-apiserver"
	I1216 20:53:37.273106   56531 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 20:53:37.273157   56531 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 20:53:37.309150   56531 cri.go:89] found id: ""
	I1216 20:53:37.309187   56531 logs.go:282] 0 containers: []
	W1216 20:53:37.309200   56531 logs.go:284] No container was found matching "etcd"
	I1216 20:53:37.309209   56531 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 20:53:37.309278   56531 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 20:53:37.346549   56531 cri.go:89] found id: ""
	I1216 20:53:37.346580   56531 logs.go:282] 0 containers: []
	W1216 20:53:37.346589   56531 logs.go:284] No container was found matching "coredns"
	I1216 20:53:37.346597   56531 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 20:53:37.346663   56531 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 20:53:37.383727   56531 cri.go:89] found id: ""
	I1216 20:53:37.383756   56531 logs.go:282] 0 containers: []
	W1216 20:53:37.383766   56531 logs.go:284] No container was found matching "kube-scheduler"
	I1216 20:53:37.383775   56531 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 20:53:37.383839   56531 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 20:53:37.422335   56531 cri.go:89] found id: ""
	I1216 20:53:37.422368   56531 logs.go:282] 0 containers: []
	W1216 20:53:37.422376   56531 logs.go:284] No container was found matching "kube-proxy"
	I1216 20:53:37.422381   56531 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 20:53:37.422445   56531 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 20:53:37.463896   56531 cri.go:89] found id: ""
	I1216 20:53:37.463925   56531 logs.go:282] 0 containers: []
	W1216 20:53:37.463933   56531 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 20:53:37.463938   56531 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 20:53:37.463985   56531 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 20:53:37.501187   56531 cri.go:89] found id: ""
	I1216 20:53:37.501220   56531 logs.go:282] 0 containers: []
	W1216 20:53:37.501231   56531 logs.go:284] No container was found matching "kindnet"
	I1216 20:53:37.501244   56531 logs.go:123] Gathering logs for kubelet ...
	I1216 20:53:37.501261   56531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 20:53:37.550941   56531 logs.go:123] Gathering logs for dmesg ...
	I1216 20:53:37.550975   56531 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 20:53:37.565963   56531 logs.go:123] Gathering logs for describe nodes ...
	I1216 20:53:37.565993   56531 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 20:53:37.700370   56531 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 20:53:37.700397   56531 logs.go:123] Gathering logs for CRI-O ...
	I1216 20:53:37.700410   56531 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 20:53:37.809100   56531 logs.go:123] Gathering logs for container status ...
	I1216 20:53:37.809148   56531 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 20:53:37.853441   56531 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1216 20:53:37.853507   56531 out.go:270] * 
	* 
	W1216 20:53:37.853561   56531 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 20:53:37.853577   56531 out.go:270] * 
	* 
	W1216 20:53:37.854491   56531 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 20:53:37.857839   56531 out.go:201] 
	W1216 20:53:37.859270   56531 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 20:53:37.859309   56531 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 20:53:37.859333   56531 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 20:53:37.860843   56531 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-847766 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-847766 -n old-k8s-version-847766
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-847766 -n old-k8s-version-847766: exit status 6 (236.775363ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 20:53:38.136584   59804 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-847766" does not appear in /home/jenkins/minikube-integration/20091-7083/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-847766" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (296.21s)
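The failure above reduces to the kubelet never answering on 127.0.0.1:10248 while kubeadm waited for the control plane, and the run's own output suggests checking 'journalctl -xeu kubelet' and retrying with the systemd cgroup driver. A minimal, unverified sketch of that triage, reusing the profile name and flags printed in the failing command (nothing here is confirmed to fix this run):

	# inspect the kubelet inside the VM, using the ssh form this suite already uses
	out/minikube-linux-amd64 -p old-k8s-version-847766 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-847766 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	# retry the first start with the cgroup-driver override suggested in the log
	out/minikube-linux-amd64 start -p old-k8s-version-847766 --memory=2200 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd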

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-606219 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-606219 --alsologtostderr -v=3: exit status 82 (2m0.664143434s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-606219"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 20:51:48.039040   58738 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:51:48.039363   58738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:51:48.039376   58738 out.go:358] Setting ErrFile to fd 2...
	I1216 20:51:48.039383   58738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:51:48.039626   58738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:51:48.039908   58738 out.go:352] Setting JSON to false
	I1216 20:51:48.040010   58738 mustload.go:65] Loading cluster: embed-certs-606219
	I1216 20:51:48.040375   58738 config.go:182] Loaded profile config "embed-certs-606219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:51:48.040456   58738 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/config.json ...
	I1216 20:51:48.040668   58738 mustload.go:65] Loading cluster: embed-certs-606219
	I1216 20:51:48.040797   58738 config.go:182] Loaded profile config "embed-certs-606219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:51:48.040833   58738 stop.go:39] StopHost: embed-certs-606219
	I1216 20:51:48.041253   58738 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:51:48.041315   58738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:51:48.057467   58738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34135
	I1216 20:51:48.058160   58738 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:51:48.058870   58738 main.go:141] libmachine: Using API Version  1
	I1216 20:51:48.058895   58738 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:51:48.059593   58738 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:51:48.065061   58738 out.go:177] * Stopping node "embed-certs-606219"  ...
	I1216 20:51:48.066556   58738 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1216 20:51:48.066620   58738 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 20:51:48.066929   58738 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1216 20:51:48.066957   58738 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 20:51:48.070099   58738 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:51:48.070637   58738 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 21:50:53 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 20:51:48.070668   58738 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:51:48.070905   58738 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 20:51:48.071155   58738 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 20:51:48.071441   58738 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 20:51:48.071604   58738 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 20:51:48.183370   58738 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1216 20:51:48.243205   58738 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1216 20:51:48.306330   58738 main.go:141] libmachine: Stopping "embed-certs-606219"...
	I1216 20:51:48.306367   58738 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 20:51:48.308066   58738 main.go:141] libmachine: (embed-certs-606219) Calling .Stop
	I1216 20:51:48.311771   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 0/120
	I1216 20:51:49.314042   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 1/120
	I1216 20:51:50.315306   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 2/120
	I1216 20:51:51.430224   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 3/120
	I1216 20:51:52.431883   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 4/120
	I1216 20:51:53.433799   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 5/120
	I1216 20:51:54.435470   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 6/120
	I1216 20:51:55.437931   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 7/120
	I1216 20:51:56.440762   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 8/120
	I1216 20:51:57.442412   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 9/120
	I1216 20:51:58.444068   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 10/120
	I1216 20:51:59.446101   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 11/120
	I1216 20:52:00.447628   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 12/120
	I1216 20:52:01.449670   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 13/120
	I1216 20:52:02.451575   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 14/120
	I1216 20:52:03.453717   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 15/120
	I1216 20:52:04.455991   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 16/120
	I1216 20:52:05.457993   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 17/120
	I1216 20:52:06.459629   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 18/120
	I1216 20:52:07.462331   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 19/120
	I1216 20:52:08.464810   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 20/120
	I1216 20:52:09.466467   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 21/120
	I1216 20:52:10.468185   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 22/120
	I1216 20:52:11.470522   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 23/120
	I1216 20:52:12.471940   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 24/120
	I1216 20:52:13.474212   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 25/120
	I1216 20:52:14.475716   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 26/120
	I1216 20:52:15.477322   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 27/120
	I1216 20:52:16.479479   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 28/120
	I1216 20:52:17.481905   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 29/120
	I1216 20:52:18.484204   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 30/120
	I1216 20:52:19.485738   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 31/120
	I1216 20:52:20.488324   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 32/120
	I1216 20:52:21.490196   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 33/120
	I1216 20:52:22.492009   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 34/120
	I1216 20:52:23.493756   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 35/120
	I1216 20:52:24.495310   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 36/120
	I1216 20:52:25.496755   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 37/120
	I1216 20:52:26.498628   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 38/120
	I1216 20:52:27.500245   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 39/120
	I1216 20:52:28.501892   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 40/120
	I1216 20:52:29.504668   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 41/120
	I1216 20:52:30.506424   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 42/120
	I1216 20:52:31.508114   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 43/120
	I1216 20:52:32.509668   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 44/120
	I1216 20:52:33.512039   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 45/120
	I1216 20:52:34.513668   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 46/120
	I1216 20:52:35.515040   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 47/120
	I1216 20:52:36.516543   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 48/120
	I1216 20:52:37.518097   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 49/120
	I1216 20:52:38.520398   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 50/120
	I1216 20:52:39.522029   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 51/120
	I1216 20:52:40.523618   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 52/120
	I1216 20:52:41.525864   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 53/120
	I1216 20:52:42.527393   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 54/120
	I1216 20:52:43.529549   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 55/120
	I1216 20:52:44.531047   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 56/120
	I1216 20:52:45.532385   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 57/120
	I1216 20:52:46.534104   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 58/120
	I1216 20:52:47.535314   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 59/120
	I1216 20:52:48.537408   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 60/120
	I1216 20:52:49.539030   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 61/120
	I1216 20:52:50.540270   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 62/120
	I1216 20:52:51.541767   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 63/120
	I1216 20:52:52.543182   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 64/120
	I1216 20:52:53.545528   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 65/120
	I1216 20:52:54.547013   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 66/120
	I1216 20:52:55.548753   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 67/120
	I1216 20:52:56.550299   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 68/120
	I1216 20:52:57.551929   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 69/120
	I1216 20:52:58.553857   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 70/120
	I1216 20:52:59.555169   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 71/120
	I1216 20:53:00.556722   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 72/120
	I1216 20:53:01.558249   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 73/120
	I1216 20:53:02.560077   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 74/120
	I1216 20:53:03.562378   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 75/120
	I1216 20:53:04.564002   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 76/120
	I1216 20:53:05.565488   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 77/120
	I1216 20:53:06.567217   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 78/120
	I1216 20:53:07.568932   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 79/120
	I1216 20:53:08.571546   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 80/120
	I1216 20:53:09.573332   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 81/120
	I1216 20:53:10.574934   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 82/120
	I1216 20:53:11.576801   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 83/120
	I1216 20:53:12.578625   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 84/120
	I1216 20:53:13.581016   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 85/120
	I1216 20:53:14.582753   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 86/120
	I1216 20:53:15.584412   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 87/120
	I1216 20:53:16.586085   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 88/120
	I1216 20:53:17.587946   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 89/120
	I1216 20:53:18.589667   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 90/120
	I1216 20:53:19.591868   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 91/120
	I1216 20:53:20.593242   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 92/120
	I1216 20:53:21.594614   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 93/120
	I1216 20:53:22.596399   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 94/120
	I1216 20:53:23.598888   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 95/120
	I1216 20:53:24.600325   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 96/120
	I1216 20:53:25.601951   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 97/120
	I1216 20:53:26.603670   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 98/120
	I1216 20:53:27.605360   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 99/120
	I1216 20:53:28.606882   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 100/120
	I1216 20:53:29.608148   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 101/120
	I1216 20:53:30.609727   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 102/120
	I1216 20:53:31.611161   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 103/120
	I1216 20:53:32.612841   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 104/120
	I1216 20:53:33.615093   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 105/120
	I1216 20:53:34.616716   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 106/120
	I1216 20:53:35.618663   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 107/120
	I1216 20:53:36.620296   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 108/120
	I1216 20:53:37.621896   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 109/120
	I1216 20:53:38.624165   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 110/120
	I1216 20:53:39.625832   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 111/120
	I1216 20:53:40.627830   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 112/120
	I1216 20:53:41.630143   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 113/120
	I1216 20:53:42.631720   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 114/120
	I1216 20:53:43.633825   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 115/120
	I1216 20:53:44.635326   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 116/120
	I1216 20:53:45.636808   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 117/120
	I1216 20:53:46.638016   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 118/120
	I1216 20:53:47.639626   58738 main.go:141] libmachine: (embed-certs-606219) Waiting for machine to stop 119/120
	I1216 20:53:48.641096   58738 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1216 20:53:48.641183   58738 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1216 20:53:48.643482   58738 out.go:201] 
	W1216 20:53:48.645119   58738 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1216 20:53:48.645138   58738 out.go:270] * 
	* 
	W1216 20:53:48.647818   58738 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 20:53:48.649306   58738 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:228: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-606219 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-606219 -n embed-certs-606219
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-606219 -n embed-certs-606219: exit status 3 (18.617237238s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 20:54:07.267585   59973 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.151:22: connect: no route to host
	E1216 20:54:07.267615   59973 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.151:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-606219" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.28s)
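This stop failure is the GUEST_STOP_TIMEOUT path: libmachine polled the KVM domain 120 times over two minutes, the machine never left the Running state, and the follow-up status check could no longer reach 192.168.61.151:22. A small sketch of the diagnostics the report itself points at (the profile name and the /tmp log path are copied from the output above; whether they still exist on the host is an assumption):

	# collect the logs the warning box asks for
	out/minikube-linux-amd64 -p embed-certs-606219 logs --file=logs.txt
	# re-check host state after the failed stop
	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-606219
	# stop-specific log referenced in the warning box
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log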

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-232338 --alsologtostderr -v=3
E1216 20:52:13.884567   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-232338 --alsologtostderr -v=3: exit status 82 (2m0.564540169s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-232338"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 20:52:11.347881   59266 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:52:11.347985   59266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:52:11.347994   59266 out.go:358] Setting ErrFile to fd 2...
	I1216 20:52:11.347998   59266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:52:11.348224   59266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:52:11.348464   59266 out.go:352] Setting JSON to false
	I1216 20:52:11.348539   59266 mustload.go:65] Loading cluster: no-preload-232338
	I1216 20:52:11.348888   59266 config.go:182] Loaded profile config "no-preload-232338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:52:11.348957   59266 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/config.json ...
	I1216 20:52:11.349136   59266 mustload.go:65] Loading cluster: no-preload-232338
	I1216 20:52:11.349236   59266 config.go:182] Loaded profile config "no-preload-232338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:52:11.349258   59266 stop.go:39] StopHost: no-preload-232338
	I1216 20:52:11.349601   59266 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:52:11.349651   59266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:52:11.365601   59266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39627
	I1216 20:52:11.366121   59266 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:52:11.366717   59266 main.go:141] libmachine: Using API Version  1
	I1216 20:52:11.366742   59266 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:52:11.367115   59266 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:52:11.370012   59266 out.go:177] * Stopping node "no-preload-232338"  ...
	I1216 20:52:11.371745   59266 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1216 20:52:11.371799   59266 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:52:11.372118   59266 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1216 20:52:11.372151   59266 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:52:11.375258   59266 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:52:11.375733   59266 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:52:11.375772   59266 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:52:11.375956   59266 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:52:11.376240   59266 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:52:11.376464   59266 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:52:11.376632   59266 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:52:11.502744   59266 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1216 20:52:11.576825   59266 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1216 20:52:11.644484   59266 main.go:141] libmachine: Stopping "no-preload-232338"...
	I1216 20:52:11.644534   59266 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 20:52:11.646065   59266 main.go:141] libmachine: (no-preload-232338) Calling .Stop
	I1216 20:52:11.649441   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 0/120
	I1216 20:52:12.650821   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 1/120
	I1216 20:52:13.652138   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 2/120
	I1216 20:52:14.653630   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 3/120
	I1216 20:52:15.655101   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 4/120
	I1216 20:52:16.656731   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 5/120
	I1216 20:52:17.658028   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 6/120
	I1216 20:52:18.660159   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 7/120
	I1216 20:52:19.662023   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 8/120
	I1216 20:52:20.663612   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 9/120
	I1216 20:52:21.665722   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 10/120
	I1216 20:52:22.667918   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 11/120
	I1216 20:52:23.669538   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 12/120
	I1216 20:52:24.670861   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 13/120
	I1216 20:52:25.672538   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 14/120
	I1216 20:52:26.674384   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 15/120
	I1216 20:52:27.675893   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 16/120
	I1216 20:52:28.677520   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 17/120
	I1216 20:52:29.679040   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 18/120
	I1216 20:52:30.680510   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 19/120
	I1216 20:52:31.682764   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 20/120
	I1216 20:52:32.684293   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 21/120
	I1216 20:52:33.685772   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 22/120
	I1216 20:52:34.687066   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 23/120
	I1216 20:52:35.688402   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 24/120
	I1216 20:52:36.690586   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 25/120
	I1216 20:52:37.692122   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 26/120
	I1216 20:52:38.693735   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 27/120
	I1216 20:52:39.695332   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 28/120
	I1216 20:52:40.696653   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 29/120
	I1216 20:52:41.698015   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 30/120
	I1216 20:52:42.699312   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 31/120
	I1216 20:52:43.700716   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 32/120
	I1216 20:52:44.702061   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 33/120
	I1216 20:52:45.703509   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 34/120
	I1216 20:52:46.705860   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 35/120
	I1216 20:52:47.707087   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 36/120
	I1216 20:52:48.708860   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 37/120
	I1216 20:52:49.710334   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 38/120
	I1216 20:52:50.711773   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 39/120
	I1216 20:52:51.714039   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 40/120
	I1216 20:52:52.715569   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 41/120
	I1216 20:52:53.717774   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 42/120
	I1216 20:52:54.719587   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 43/120
	I1216 20:52:55.721901   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 44/120
	I1216 20:52:56.724258   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 45/120
	I1216 20:52:57.725653   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 46/120
	I1216 20:52:58.727375   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 47/120
	I1216 20:52:59.728619   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 48/120
	I1216 20:53:00.730334   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 49/120
	I1216 20:53:01.731930   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 50/120
	I1216 20:53:02.733783   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 51/120
	I1216 20:53:03.735524   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 52/120
	I1216 20:53:04.737260   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 53/120
	I1216 20:53:05.739091   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 54/120
	I1216 20:53:06.741776   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 55/120
	I1216 20:53:07.743383   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 56/120
	I1216 20:53:08.745066   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 57/120
	I1216 20:53:09.746698   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 58/120
	I1216 20:53:10.748297   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 59/120
	I1216 20:53:11.750774   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 60/120
	I1216 20:53:12.752477   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 61/120
	I1216 20:53:13.753988   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 62/120
	I1216 20:53:14.755493   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 63/120
	I1216 20:53:15.757209   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 64/120
	I1216 20:53:16.759853   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 65/120
	I1216 20:53:17.761564   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 66/120
	I1216 20:53:18.763217   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 67/120
	I1216 20:53:19.765199   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 68/120
	I1216 20:53:20.766644   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 69/120
	I1216 20:53:21.767991   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 70/120
	I1216 20:53:22.769935   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 71/120
	I1216 20:53:23.771684   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 72/120
	I1216 20:53:24.773516   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 73/120
	I1216 20:53:25.775263   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 74/120
	I1216 20:53:26.777699   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 75/120
	I1216 20:53:27.779319   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 76/120
	I1216 20:53:28.780770   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 77/120
	I1216 20:53:29.782341   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 78/120
	I1216 20:53:30.783980   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 79/120
	I1216 20:53:31.786458   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 80/120
	I1216 20:53:32.788008   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 81/120
	I1216 20:53:33.789463   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 82/120
	I1216 20:53:34.791076   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 83/120
	I1216 20:53:35.792843   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 84/120
	I1216 20:53:36.794998   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 85/120
	I1216 20:53:37.796472   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 86/120
	I1216 20:53:38.798013   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 87/120
	I1216 20:53:39.799532   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 88/120
	I1216 20:53:40.801366   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 89/120
	I1216 20:53:41.802778   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 90/120
	I1216 20:53:42.804413   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 91/120
	I1216 20:53:43.805787   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 92/120
	I1216 20:53:44.807423   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 93/120
	I1216 20:53:45.809078   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 94/120
	I1216 20:53:46.811067   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 95/120
	I1216 20:53:47.812876   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 96/120
	I1216 20:53:48.814589   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 97/120
	I1216 20:53:49.816371   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 98/120
	I1216 20:53:50.817962   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 99/120
	I1216 20:53:51.820267   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 100/120
	I1216 20:53:52.821795   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 101/120
	I1216 20:53:53.823365   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 102/120
	I1216 20:53:54.824758   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 103/120
	I1216 20:53:55.826311   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 104/120
	I1216 20:53:56.828727   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 105/120
	I1216 20:53:57.830356   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 106/120
	I1216 20:53:58.832021   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 107/120
	I1216 20:53:59.833533   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 108/120
	I1216 20:54:00.835270   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 109/120
	I1216 20:54:01.837550   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 110/120
	I1216 20:54:02.839476   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 111/120
	I1216 20:54:03.840912   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 112/120
	I1216 20:54:04.842684   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 113/120
	I1216 20:54:05.844364   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 114/120
	I1216 20:54:06.846506   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 115/120
	I1216 20:54:07.848180   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 116/120
	I1216 20:54:08.849958   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 117/120
	I1216 20:54:09.851409   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 118/120
	I1216 20:54:10.852835   59266 main.go:141] libmachine: (no-preload-232338) Waiting for machine to stop 119/120
	I1216 20:54:11.854293   59266 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1216 20:54:11.854353   59266 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1216 20:54:11.856514   59266 out.go:201] 
	W1216 20:54:11.857960   59266 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1216 20:54:11.857987   59266 out.go:270] * 
	* 
	W1216 20:54:11.860790   59266 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 20:54:11.862216   59266 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:228: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-232338 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-232338 -n no-preload-232338
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-232338 -n no-preload-232338: exit status 3 (18.443863485s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 20:54:30.307651   60137 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.240:22: connect: no route to host
	E1216 20:54:30.307681   60137 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.240:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-232338" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-327790 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-327790 --alsologtostderr -v=3: exit status 82 (2m0.521133282s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-327790"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 20:52:58.981345   59630 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:52:58.981507   59630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:52:58.981519   59630 out.go:358] Setting ErrFile to fd 2...
	I1216 20:52:58.981524   59630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:52:58.981721   59630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:52:58.981956   59630 out.go:352] Setting JSON to false
	I1216 20:52:58.982031   59630 mustload.go:65] Loading cluster: default-k8s-diff-port-327790
	I1216 20:52:58.982400   59630 config.go:182] Loaded profile config "default-k8s-diff-port-327790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:52:58.982469   59630 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/config.json ...
	I1216 20:52:58.982681   59630 mustload.go:65] Loading cluster: default-k8s-diff-port-327790
	I1216 20:52:58.982799   59630 config.go:182] Loaded profile config "default-k8s-diff-port-327790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:52:58.982825   59630 stop.go:39] StopHost: default-k8s-diff-port-327790
	I1216 20:52:58.983194   59630 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:52:58.983264   59630 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:52:58.998627   59630 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36761
	I1216 20:52:58.999136   59630 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:52:58.999747   59630 main.go:141] libmachine: Using API Version  1
	I1216 20:52:58.999776   59630 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:52:59.000137   59630 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:52:59.002495   59630 out.go:177] * Stopping node "default-k8s-diff-port-327790"  ...
	I1216 20:52:59.003897   59630 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1216 20:52:59.003943   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:52:59.004233   59630 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1216 20:52:59.004261   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:52:59.007345   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:52:59.007777   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:52:07 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:52:59.007805   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:52:59.007964   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:52:59.008184   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:52:59.008327   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:52:59.008499   59630 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:52:59.101921   59630 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1216 20:52:59.158524   59630 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1216 20:52:59.222633   59630 main.go:141] libmachine: Stopping "default-k8s-diff-port-327790"...
	I1216 20:52:59.222662   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 20:52:59.224351   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Stop
	I1216 20:52:59.228522   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 0/120
	I1216 20:53:00.230172   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 1/120
	I1216 20:53:01.231778   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 2/120
	I1216 20:53:02.233255   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 3/120
	I1216 20:53:03.234729   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 4/120
	I1216 20:53:04.236952   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 5/120
	I1216 20:53:05.238637   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 6/120
	I1216 20:53:06.240200   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 7/120
	I1216 20:53:07.241754   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 8/120
	I1216 20:53:08.243219   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 9/120
	I1216 20:53:09.245161   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 10/120
	I1216 20:53:10.246868   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 11/120
	I1216 20:53:11.248908   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 12/120
	I1216 20:53:12.251365   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 13/120
	I1216 20:53:13.252879   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 14/120
	I1216 20:53:14.255204   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 15/120
	I1216 20:53:15.256862   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 16/120
	I1216 20:53:16.258466   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 17/120
	I1216 20:53:17.260088   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 18/120
	I1216 20:53:18.261541   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 19/120
	I1216 20:53:19.263273   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 20/120
	I1216 20:53:20.264971   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 21/120
	I1216 20:53:21.266983   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 22/120
	I1216 20:53:22.268596   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 23/120
	I1216 20:53:23.270347   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 24/120
	I1216 20:53:24.272620   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 25/120
	I1216 20:53:25.274434   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 26/120
	I1216 20:53:26.276221   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 27/120
	I1216 20:53:27.277879   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 28/120
	I1216 20:53:28.279519   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 29/120
	I1216 20:53:29.282125   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 30/120
	I1216 20:53:30.283640   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 31/120
	I1216 20:53:31.285189   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 32/120
	I1216 20:53:32.286930   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 33/120
	I1216 20:53:33.288342   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 34/120
	I1216 20:53:34.290724   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 35/120
	I1216 20:53:35.292133   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 36/120
	I1216 20:53:36.293685   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 37/120
	I1216 20:53:37.295499   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 38/120
	I1216 20:53:38.297501   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 39/120
	I1216 20:53:39.299838   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 40/120
	I1216 20:53:40.301994   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 41/120
	I1216 20:53:41.303636   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 42/120
	I1216 20:53:42.306071   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 43/120
	I1216 20:53:43.307659   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 44/120
	I1216 20:53:44.309860   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 45/120
	I1216 20:53:45.312032   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 46/120
	I1216 20:53:46.313517   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 47/120
	I1216 20:53:47.315540   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 48/120
	I1216 20:53:48.317867   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 49/120
	I1216 20:53:49.320647   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 50/120
	I1216 20:53:50.322095   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 51/120
	I1216 20:53:51.323705   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 52/120
	I1216 20:53:52.325206   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 53/120
	I1216 20:53:53.326781   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 54/120
	I1216 20:53:54.329026   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 55/120
	I1216 20:53:55.330604   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 56/120
	I1216 20:53:56.332422   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 57/120
	I1216 20:53:57.334085   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 58/120
	I1216 20:53:58.335586   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 59/120
	I1216 20:53:59.338121   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 60/120
	I1216 20:54:00.340052   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 61/120
	I1216 20:54:01.341596   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 62/120
	I1216 20:54:02.343317   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 63/120
	I1216 20:54:03.345061   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 64/120
	I1216 20:54:04.347353   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 65/120
	I1216 20:54:05.349072   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 66/120
	I1216 20:54:06.350647   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 67/120
	I1216 20:54:07.352222   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 68/120
	I1216 20:54:08.353871   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 69/120
	I1216 20:54:09.355880   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 70/120
	I1216 20:54:10.357950   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 71/120
	I1216 20:54:11.359607   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 72/120
	I1216 20:54:12.361931   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 73/120
	I1216 20:54:13.363723   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 74/120
	I1216 20:54:14.366159   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 75/120
	I1216 20:54:15.367937   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 76/120
	I1216 20:54:16.369809   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 77/120
	I1216 20:54:17.371415   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 78/120
	I1216 20:54:18.372939   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 79/120
	I1216 20:54:19.375645   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 80/120
	I1216 20:54:20.377391   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 81/120
	I1216 20:54:21.379551   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 82/120
	I1216 20:54:22.381541   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 83/120
	I1216 20:54:23.383486   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 84/120
	I1216 20:54:24.385903   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 85/120
	I1216 20:54:25.387711   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 86/120
	I1216 20:54:26.389352   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 87/120
	I1216 20:54:27.390989   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 88/120
	I1216 20:54:28.392513   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 89/120
	I1216 20:54:29.394596   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 90/120
	I1216 20:54:30.396092   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 91/120
	I1216 20:54:31.397989   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 92/120
	I1216 20:54:32.399443   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 93/120
	I1216 20:54:33.400995   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 94/120
	I1216 20:54:34.403512   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 95/120
	I1216 20:54:35.405535   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 96/120
	I1216 20:54:36.407054   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 97/120
	I1216 20:54:37.408638   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 98/120
	I1216 20:54:38.410224   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 99/120
	I1216 20:54:39.412010   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 100/120
	I1216 20:54:40.414003   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 101/120
	I1216 20:54:41.415905   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 102/120
	I1216 20:54:42.417621   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 103/120
	I1216 20:54:43.419208   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 104/120
	I1216 20:54:44.421230   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 105/120
	I1216 20:54:45.423236   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 106/120
	I1216 20:54:46.425099   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 107/120
	I1216 20:54:47.426646   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 108/120
	I1216 20:54:48.428411   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 109/120
	I1216 20:54:49.430452   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 110/120
	I1216 20:54:50.432117   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 111/120
	I1216 20:54:51.433702   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 112/120
	I1216 20:54:52.435647   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 113/120
	I1216 20:54:53.437322   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 114/120
	I1216 20:54:54.439695   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 115/120
	I1216 20:54:55.441520   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 116/120
	I1216 20:54:56.443058   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 117/120
	I1216 20:54:57.444798   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 118/120
	I1216 20:54:58.446362   59630 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for machine to stop 119/120
	I1216 20:54:59.447087   59630 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1216 20:54:59.447146   59630 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1216 20:54:59.449320   59630 out.go:201] 
	W1216 20:54:59.450974   59630 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1216 20:54:59.450998   59630 out.go:270] * 
	* 
	W1216 20:54:59.453686   59630 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 20:54:59.455092   59630 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:228: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-327790 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-327790 -n default-k8s-diff-port-327790
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-327790 -n default-k8s-diff-port-327790: exit status 3 (18.466875383s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 20:55:17.923596   60524 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E1216 20:55:17.923618   60524 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-327790" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-847766 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-847766 create -f testdata/busybox.yaml: exit status 1 (42.025571ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-847766" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-847766 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-847766 -n old-k8s-version-847766
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-847766 -n old-k8s-version-847766: exit status 6 (246.347294ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 20:53:38.407995   59845 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-847766" does not appear in /home/jenkins/minikube-integration/20091-7083/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-847766" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-847766 -n old-k8s-version-847766
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-847766 -n old-k8s-version-847766: exit status 6 (237.520821ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 20:53:38.664320   59876 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-847766" does not appear in /home/jenkins/minikube-integration/20091-7083/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-847766" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (109.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-847766 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-847766 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m48.890705341s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-847766 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-847766 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-847766 describe deploy/metrics-server -n kube-system: exit status 1 (44.364635ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-847766" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-847766 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-847766 -n old-k8s-version-847766
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-847766 -n old-k8s-version-847766: exit status 6 (227.792314ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 20:55:27.825193   60762 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-847766" does not appear in /home/jenkins/minikube-integration/20091-7083/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-847766" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (109.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-606219 -n embed-certs-606219
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-606219 -n embed-certs-606219: exit status 3 (3.167561763s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 20:54:10.435654   60070 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.151:22: connect: no route to host
	E1216 20:54:10.435682   60070 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.151:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:237: status error: exit status 3 (may be ok)
start_stop_delete_test.go:239: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-606219 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-606219 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153634619s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.151:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:246: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-606219 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-606219 -n embed-certs-606219
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-606219 -n embed-certs-606219: exit status 3 (3.062249198s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 20:54:19.651659   60184 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.151:22: connect: no route to host
	E1216 20:54:19.651681   60184 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.151:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-606219" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-232338 -n no-preload-232338
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-232338 -n no-preload-232338: exit status 3 (3.16775035s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 20:54:33.475659   60306 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.240:22: connect: no route to host
	E1216 20:54:33.475689   60306 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.240:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:237: status error: exit status 3 (may be ok)
start_stop_delete_test.go:239: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-232338 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-232338 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153339927s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.240:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:246: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-232338 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-232338 -n no-preload-232338
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-232338 -n no-preload-232338: exit status 3 (3.062430683s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 20:54:42.691598   60374 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.240:22: connect: no route to host
	E1216 20:54:42.691624   60374 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.240:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-232338" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-327790 -n default-k8s-diff-port-327790
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-327790 -n default-k8s-diff-port-327790: exit status 3 (3.167556621s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 20:55:21.091628   60639 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E1216 20:55:21.091649   60639 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:237: status error: exit status 3 (may be ok)
start_stop_delete_test.go:239: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-327790 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-327790 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155367158s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:246: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-327790 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-327790 -n default-k8s-diff-port-327790
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-327790 -n default-k8s-diff-port-327790: exit status 3 (3.060277355s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 20:55:30.307619   60721 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E1216 20:55:30.307639   60721 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-327790" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
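The default-k8s-diff-port entry fails the same way: the post-stop check at start_stop_delete_test.go:237 expects the host to report "Stopped", and the addon enable at line 244 then exits 11 with MK_ADDON_ENABLE_PAUSED because the paused check has to list containers over SSH and the dial to 192.168.39.162 also gets "no route to host". A rough reproduction sketch of that sequence, paraphrased from the logged commands rather than from the test source; the crictl listing at the end is an assumption about what the paused check amounts to, and it would fail on the same SSH dial here:

	out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-327790   # test expects "Stopped", log shows "Error"
	out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-327790 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4                             # exits 11: MK_ADDON_ENABLE_PAUSED
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-327790 -- sudo crictl ps -a    # the kind of container listing the paused check needs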

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (753.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-847766 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1216 20:55:50.481416   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:57:13.884284   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:00:50.482155   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:02:13.554499   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:02:13.884212   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-847766 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m30.090197983s)

                                                
                                                
-- stdout --
	* [old-k8s-version-847766] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20091
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-847766" primary control-plane node in "old-k8s-version-847766" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-847766" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 20:55:34.390724   60933 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:55:34.390973   60933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:55:34.390982   60933 out.go:358] Setting ErrFile to fd 2...
	I1216 20:55:34.390986   60933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:55:34.391166   60933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:55:34.391763   60933 out.go:352] Setting JSON to false
	I1216 20:55:34.392611   60933 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5879,"bootTime":1734376655,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 20:55:34.392675   60933 start.go:139] virtualization: kvm guest
	I1216 20:55:34.394822   60933 out.go:177] * [old-k8s-version-847766] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 20:55:34.396184   60933 notify.go:220] Checking for updates...
	I1216 20:55:34.396201   60933 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 20:55:34.397724   60933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 20:55:34.399130   60933 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:55:34.400470   60933 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:55:34.401934   60933 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 20:55:34.403341   60933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 20:55:34.405179   60933 config.go:182] Loaded profile config "old-k8s-version-847766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:55:34.405571   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:55:34.405650   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:55:34.421052   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I1216 20:55:34.421523   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:55:34.422018   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:55:34.422035   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:55:34.422373   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:55:34.422646   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:55:34.424565   60933 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I1216 20:55:34.426088   60933 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 20:55:34.426419   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:55:34.426474   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:55:34.441375   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I1216 20:55:34.441833   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:55:34.442327   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:55:34.442349   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:55:34.442658   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:55:34.442852   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:55:34.480512   60933 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 20:55:34.481972   60933 start.go:297] selected driver: kvm2
	I1216 20:55:34.481988   60933 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:55:34.482125   60933 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 20:55:34.482826   60933 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:55:34.482907   60933 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 20:55:34.498561   60933 install.go:137] /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1216 20:55:34.498953   60933 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 20:55:34.498981   60933 cni.go:84] Creating CNI manager for ""
	I1216 20:55:34.499022   60933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:55:34.499060   60933 start.go:340] cluster config:
	{Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:55:34.499164   60933 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:55:34.501128   60933 out.go:177] * Starting "old-k8s-version-847766" primary control-plane node in "old-k8s-version-847766" cluster
	I1216 20:55:34.502579   60933 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:55:34.502609   60933 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:55:34.502615   60933 cache.go:56] Caching tarball of preloaded images
	I1216 20:55:34.502716   60933 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:55:34.502731   60933 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1216 20:55:34.502823   60933 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/config.json ...
	I1216 20:55:34.503011   60933 start.go:360] acquireMachinesLock for old-k8s-version-847766: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:59:37.140408   60933 start.go:364] duration metric: took 4m2.637362394s to acquireMachinesLock for "old-k8s-version-847766"
	I1216 20:59:37.140483   60933 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:37.140491   60933 fix.go:54] fixHost starting: 
	I1216 20:59:37.140933   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:37.140988   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:37.159075   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
	I1216 20:59:37.159574   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:37.160140   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:59:37.160172   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:37.160560   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:37.160773   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:37.160889   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetState
	I1216 20:59:37.162561   60933 fix.go:112] recreateIfNeeded on old-k8s-version-847766: state=Stopped err=<nil>
	I1216 20:59:37.162603   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	W1216 20:59:37.162755   60933 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:37.166031   60933 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-847766" ...
	I1216 20:59:37.167470   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .Start
	I1216 20:59:37.167715   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring networks are active...
	I1216 20:59:37.168626   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring network default is active
	I1216 20:59:37.169114   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring network mk-old-k8s-version-847766 is active
	I1216 20:59:37.169670   60933 main.go:141] libmachine: (old-k8s-version-847766) Getting domain xml...
	I1216 20:59:37.170345   60933 main.go:141] libmachine: (old-k8s-version-847766) Creating domain...
	I1216 20:59:38.535579   60933 main.go:141] libmachine: (old-k8s-version-847766) Waiting to get IP...
	I1216 20:59:38.536661   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:38.537089   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:38.537174   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:38.537078   61973 retry.go:31] will retry after 277.62307ms: waiting for machine to come up
	I1216 20:59:38.816788   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:38.817329   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:38.817360   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:38.817272   61973 retry.go:31] will retry after 346.694382ms: waiting for machine to come up
	I1216 20:59:39.165778   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:39.166377   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:39.166436   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:39.166355   61973 retry.go:31] will retry after 416.599295ms: waiting for machine to come up
	I1216 20:59:39.585259   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:39.585762   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:39.585791   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:39.585737   61973 retry.go:31] will retry after 526.969594ms: waiting for machine to come up
	I1216 20:59:40.114653   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:40.115175   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:40.115205   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:40.115140   61973 retry.go:31] will retry after 502.283372ms: waiting for machine to come up
	I1216 20:59:40.619067   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:40.619633   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:40.619682   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:40.619571   61973 retry.go:31] will retry after 764.799982ms: waiting for machine to come up
	I1216 20:59:41.385515   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:41.386066   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:41.386100   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:41.386027   61973 retry.go:31] will retry after 982.237202ms: waiting for machine to come up
	I1216 20:59:42.369934   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:42.370414   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:42.370449   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:42.370373   61973 retry.go:31] will retry after 1.163280736s: waiting for machine to come up
	I1216 20:59:43.534829   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:43.535194   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:43.535224   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:43.535143   61973 retry.go:31] will retry after 1.630958514s: waiting for machine to come up
	I1216 20:59:45.168144   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:45.168719   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:45.168750   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:45.168671   61973 retry.go:31] will retry after 1.835631107s: waiting for machine to come up
	I1216 20:59:47.005854   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:47.006380   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:47.006422   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:47.006339   61973 retry.go:31] will retry after 1.943800898s: waiting for machine to come up
	I1216 20:59:48.951552   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:48.952050   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:48.952114   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:48.952008   61973 retry.go:31] will retry after 2.949898251s: waiting for machine to come up
	I1216 20:59:51.905108   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:51.905560   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:51.905594   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:51.905505   61973 retry.go:31] will retry after 3.44069953s: waiting for machine to come up
	I1216 20:59:55.349557   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.350105   60933 main.go:141] libmachine: (old-k8s-version-847766) Found IP for machine: 192.168.72.240
	I1216 20:59:55.350129   60933 main.go:141] libmachine: (old-k8s-version-847766) Reserving static IP address...
	I1216 20:59:55.350140   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has current primary IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.350574   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "old-k8s-version-847766", mac: "52:54:00:c4:f2:8a", ip: "192.168.72.240"} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.350608   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | skip adding static IP to network mk-old-k8s-version-847766 - found existing host DHCP lease matching {name: "old-k8s-version-847766", mac: "52:54:00:c4:f2:8a", ip: "192.168.72.240"}
	I1216 20:59:55.350623   60933 main.go:141] libmachine: (old-k8s-version-847766) Reserved static IP address: 192.168.72.240
	I1216 20:59:55.350646   60933 main.go:141] libmachine: (old-k8s-version-847766) Waiting for SSH to be available...
	I1216 20:59:55.350662   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Getting to WaitForSSH function...
	I1216 20:59:55.353011   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.353346   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.353369   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.353535   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Using SSH client type: external
	I1216 20:59:55.353560   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa (-rw-------)
	I1216 20:59:55.353592   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:55.353606   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | About to run SSH command:
	I1216 20:59:55.353621   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | exit 0
	I1216 20:59:55.480726   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | SSH cmd err, output: <nil>: 
	I1216 20:59:55.481062   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetConfigRaw
	I1216 20:59:55.481692   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:55.484113   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.484500   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.484537   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.484769   60933 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/config.json ...
	I1216 20:59:55.484985   60933 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:55.485008   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:55.485220   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.487511   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.487835   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.487862   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.487958   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.488134   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.488268   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.488405   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.488546   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.488725   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.488735   60933 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:55.596092   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:55.596127   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.596401   60933 buildroot.go:166] provisioning hostname "old-k8s-version-847766"
	I1216 20:59:55.596426   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.596644   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.599286   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.599631   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.599662   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.599814   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.600010   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.600166   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.600299   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.600462   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.600665   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.600678   60933 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-847766 && echo "old-k8s-version-847766" | sudo tee /etc/hostname
	I1216 20:59:55.731851   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-847766
	
	I1216 20:59:55.731879   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.734802   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.735155   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.735186   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.735422   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.735650   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.735815   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.736030   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.736194   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.736377   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.736393   60933 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-847766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-847766/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-847766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:55.857050   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:55.857108   60933 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:55.857138   60933 buildroot.go:174] setting up certificates
	I1216 20:59:55.857163   60933 provision.go:84] configureAuth start
	I1216 20:59:55.857180   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.857505   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:55.860286   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.860613   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.860643   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.860826   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.863292   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.863682   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.863709   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.863871   60933 provision.go:143] copyHostCerts
	I1216 20:59:55.863920   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:55.863929   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:55.863986   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:55.864069   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:55.864077   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:55.864104   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:55.864159   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:55.864177   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:55.864202   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:55.864250   60933 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-847766 san=[127.0.0.1 192.168.72.240 localhost minikube old-k8s-version-847766]
	I1216 20:59:56.058548   60933 provision.go:177] copyRemoteCerts
	I1216 20:59:56.058603   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:56.058638   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.061354   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.061666   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.061707   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.061838   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.062039   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.062200   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.062356   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.146788   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:56.172789   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1216 20:59:56.198040   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 20:59:56.222476   60933 provision.go:87] duration metric: took 365.299433ms to configureAuth
	I1216 20:59:56.222505   60933 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:56.222706   60933 config.go:182] Loaded profile config "old-k8s-version-847766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:59:56.222790   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.225376   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.225752   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.225779   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.225965   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.226182   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.226363   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.226516   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.226687   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:56.226887   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:56.226906   60933 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:56.451434   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:56.451464   60933 machine.go:96] duration metric: took 966.463181ms to provisionDockerMachine
	I1216 20:59:56.451478   60933 start.go:293] postStartSetup for "old-k8s-version-847766" (driver="kvm2")
	I1216 20:59:56.451513   60933 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:56.451541   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.451926   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:56.451980   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.454840   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.455302   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.455331   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.455454   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.455661   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.455814   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.455988   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.542904   60933 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:56.547362   60933 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:56.547389   60933 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:56.547467   60933 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:56.547568   60933 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:56.547677   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:56.557902   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:56.582796   60933 start.go:296] duration metric: took 131.303406ms for postStartSetup
	I1216 20:59:56.582846   60933 fix.go:56] duration metric: took 19.442354832s for fixHost
	I1216 20:59:56.582872   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.585478   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.585803   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.585831   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.586011   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.586194   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.586358   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.586472   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.586640   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:56.586809   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:56.586819   60933 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:56.696254   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382796.650794736
	
	I1216 20:59:56.696274   60933 fix.go:216] guest clock: 1734382796.650794736
	I1216 20:59:56.696281   60933 fix.go:229] Guest: 2024-12-16 20:59:56.650794736 +0000 UTC Remote: 2024-12-16 20:59:56.582851742 +0000 UTC m=+262.230512454 (delta=67.942994ms)
	I1216 20:59:56.696299   60933 fix.go:200] guest clock delta is within tolerance: 67.942994ms
	I1216 20:59:56.696304   60933 start.go:83] releasing machines lock for "old-k8s-version-847766", held for 19.555844424s
	I1216 20:59:56.696333   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.696608   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:56.699486   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.699932   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.699964   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.700068   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700645   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700846   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700948   60933 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:56.701007   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.701115   60933 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:56.701140   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.703937   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704117   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704314   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.704342   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704496   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.704567   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.704601   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704680   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.704762   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.704836   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.704982   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.704987   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.705134   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.705259   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.784295   60933 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:56.817481   60933 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:56.968124   60933 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:56.979827   60933 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:56.979892   60933 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:56.997867   60933 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
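For reference, the find/mv step above renames any pre-existing bridge- or podman-managed CNI configs (here 87-podman-bridge.conflist, per the line above) by appending .mk_disabled, so the runtime stops loading them and only the network minikube configures later is active. Roughly equivalent to:

    sudo mv /etc/cni/net.d/87-podman-bridge.conflist \
            /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled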
	I1216 20:59:56.997891   60933 start.go:495] detecting cgroup driver to use...
	I1216 20:59:56.997954   60933 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:57.016064   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:57.031596   60933 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:57.031665   60933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:57.047562   60933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:57.062737   60933 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:57.183918   60933 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:57.354699   60933 docker.go:233] disabling docker service ...
	I1216 20:59:57.354794   60933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:57.373311   60933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:57.390014   60933 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:57.523623   60933 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:57.656261   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:57.671374   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:57.692647   60933 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1216 20:59:57.692709   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.704496   60933 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:57.704548   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.715848   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.727022   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
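Taken together, the tee and sed steps above point crictl at the CRI-O socket and align CRI-O's pause image and cgroup handling with the v1.20.0 control plane. A sketch of the resulting files, inferred from the logged commands rather than read back from the node:

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (relevant lines after the sed edits)
    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"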
	I1216 20:59:57.738899   60933 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:57.756457   60933 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:57.773236   60933 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:57.773289   60933 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:57.789209   60933 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
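The modprobe and echo above are the usual kubeadm networking prerequisites: br_netfilter exposes bridged pod traffic to iptables (the earlier sysctl failed only because the module was not yet loaded), and ip_forward enables IPv4 forwarding on the node. An illustrative re-check, not part of the test run:

    sudo sysctl net.bridge.bridge-nf-call-iptables   # resolves once br_netfilter is loaded
    cat /proc/sys/net/ipv4/ip_forward                # 1 after the echo above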
	I1216 20:59:57.800881   60933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:57.927794   60933 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:58.038173   60933 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:58.038256   60933 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:58.044633   60933 start.go:563] Will wait 60s for crictl version
	I1216 20:59:58.044705   60933 ssh_runner.go:195] Run: which crictl
	I1216 20:59:58.048781   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:58.088449   60933 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:58.088579   60933 ssh_runner.go:195] Run: crio --version
	I1216 20:59:58.119211   60933 ssh_runner.go:195] Run: crio --version
	I1216 20:59:58.151411   60933 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1216 20:59:58.152582   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:58.155196   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:58.155558   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:58.155587   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:58.155763   60933 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:58.160369   60933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:58.174013   60933 kubeadm.go:883] updating cluster {Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:58.174155   60933 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:59:58.174212   60933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:58.226674   60933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 20:59:58.226747   60933 ssh_runner.go:195] Run: which lz4
	I1216 20:59:58.231330   60933 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 20:59:58.236178   60933 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 20:59:58.236214   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1216 21:00:00.100507   60933 crio.go:462] duration metric: took 1.869217257s to copy over tarball
	I1216 21:00:00.100619   60933 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 21:00:03.581430   60933 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.480755636s)
	I1216 21:00:03.581469   60933 crio.go:469] duration metric: took 3.480924144s to extract the tarball
	I1216 21:00:03.581478   60933 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 21:00:03.627932   60933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:03.667985   60933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 21:00:03.668013   60933 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 21:00:03.668078   60933 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:03.668110   60933 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.668207   60933 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.668262   60933 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.668262   60933 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:03.668332   60933 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1216 21:00:03.668215   60933 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.668092   60933 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.670096   60933 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:03.670294   60933 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.670305   60933 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.670305   60933 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.670333   60933 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.670394   60933 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1216 21:00:03.670396   60933 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:03.670467   60933 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.861573   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.869704   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.885911   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.904748   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1216 21:00:03.905328   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.906138   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.936548   60933 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1216 21:00:03.936658   60933 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.936736   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.019039   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.033811   60933 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1216 21:00:04.033863   60933 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.033927   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.082946   60933 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1216 21:00:04.082995   60933 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.083008   60933 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1216 21:00:04.083050   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083055   60933 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1216 21:00:04.083063   60933 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1216 21:00:04.083073   60933 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.083133   60933 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1216 21:00:04.083203   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.083205   60933 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.083306   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083145   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083139   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.123434   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.123702   60933 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1216 21:00:04.123740   60933 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.123786   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.150878   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:04.155586   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.155774   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.155877   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.155968   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.156205   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.226110   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.226429   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.457220   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.457399   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.457507   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.457596   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.457687   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.613834   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.613870   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.613923   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.613931   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.613960   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.613972   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.619915   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1216 21:00:04.791265   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1216 21:00:04.791297   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.791315   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1216 21:00:04.791352   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1216 21:00:04.791366   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1216 21:00:04.791384   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1216 21:00:04.836463   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1216 21:00:04.836536   60933 cache_images.go:92] duration metric: took 1.168508622s to LoadCachedImages
	W1216 21:00:04.836633   60933 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1216 21:00:04.836649   60933 kubeadm.go:934] updating node { 192.168.72.240 8443 v1.20.0 crio true true} ...
	I1216 21:00:04.836781   60933 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-847766 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 21:00:04.836877   60933 ssh_runner.go:195] Run: crio config
	I1216 21:00:04.898330   60933 cni.go:84] Creating CNI manager for ""
	I1216 21:00:04.898357   60933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:04.898371   60933 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 21:00:04.898396   60933 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.240 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-847766 NodeName:old-k8s-version-847766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1216 21:00:04.898560   60933 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-847766"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 21:00:04.898643   60933 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1216 21:00:04.910946   60933 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 21:00:04.911045   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 21:00:04.923199   60933 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1216 21:00:04.942705   60933 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 21:00:04.976598   60933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1216 21:00:05.001967   60933 ssh_runner.go:195] Run: grep 192.168.72.240	control-plane.minikube.internal$ /etc/hosts
	I1216 21:00:05.006819   60933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
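The bash pipeline above is minikube's idempotent /etc/hosts edit: it filters out any stale control-plane.minikube.internal entry and appends the current mapping, so the file should end up containing a line like (inferred from the command, not read back):

    192.168.72.240	control-plane.minikube.internal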
	I1216 21:00:05.020604   60933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:05.143039   60933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:00:05.162507   60933 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766 for IP: 192.168.72.240
	I1216 21:00:05.162535   60933 certs.go:194] generating shared ca certs ...
	I1216 21:00:05.162554   60933 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:05.162749   60933 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 21:00:05.162792   60933 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 21:00:05.162803   60933 certs.go:256] generating profile certs ...
	I1216 21:00:05.162907   60933 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.key
	I1216 21:00:05.162976   60933 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key.6c8704df
	I1216 21:00:05.163011   60933 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key
	I1216 21:00:05.163148   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 21:00:05.163176   60933 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 21:00:05.163186   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 21:00:05.163210   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 21:00:05.163278   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 21:00:05.163315   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 21:00:05.163379   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:05.164216   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 21:00:05.222491   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 21:00:05.253517   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 21:00:05.294338   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 21:00:05.342850   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 21:00:05.388068   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 21:00:05.422591   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 21:00:05.471916   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 21:00:05.505836   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 21:00:05.539404   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 21:00:05.570819   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 21:00:05.602079   60933 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 21:00:05.630577   60933 ssh_runner.go:195] Run: openssl version
	I1216 21:00:05.640017   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 21:00:05.653759   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.659573   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.659645   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.666667   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 21:00:05.680061   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 21:00:05.692776   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.698644   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.698728   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.705913   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 21:00:05.730062   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 21:00:05.750034   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.757158   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.757252   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.765679   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
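The .0 symlink names in the three blocks above follow OpenSSL's subject-hash convention for CA lookup in /etc/ssl/certs; for example, the b5213941.0 link for minikubeCA matches the hash printed by the earlier openssl -hash call:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941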
	I1216 21:00:05.782537   60933 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 21:00:05.788291   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 21:00:05.797707   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 21:00:05.807016   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 21:00:05.818160   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 21:00:05.827428   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 21:00:05.836499   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
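Each openssl -checkend 86400 call above exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would typically prompt minikube to regenerate that cert. Standalone form of the same check (illustrative):

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h"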
	I1216 21:00:05.846104   60933 kubeadm.go:392] StartCluster: {Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:00:05.846244   60933 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 21:00:05.846331   60933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:05.901274   60933 cri.go:89] found id: ""
	I1216 21:00:05.901376   60933 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 21:00:05.917353   60933 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 21:00:05.917381   60933 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 21:00:05.917439   60933 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 21:00:05.928587   60933 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 21:00:05.932546   60933 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-847766" does not appear in /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:00:05.933844   60933 kubeconfig.go:62] /home/jenkins/minikube-integration/20091-7083/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-847766" cluster setting kubeconfig missing "old-k8s-version-847766" context setting]
	I1216 21:00:05.935400   60933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:05.938054   60933 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 21:00:05.950384   60933 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.240
	I1216 21:00:05.950433   60933 kubeadm.go:1160] stopping kube-system containers ...
	I1216 21:00:05.950450   60933 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 21:00:05.950515   60933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:05.999495   60933 cri.go:89] found id: ""
	I1216 21:00:05.999588   60933 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 21:00:06.024765   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:00:06.037807   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:00:06.037836   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:00:06.037894   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:00:06.048926   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:00:06.048997   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:00:06.060167   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:00:06.070841   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:00:06.070910   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:00:06.083517   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:00:06.099124   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:00:06.099214   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:00:06.110004   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:00:06.125600   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:00:06.125668   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:00:06.137212   60933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:00:06.148873   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:06.316611   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.220187   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.549730   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.698864   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
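Those four kubeadm init phases re-create the certificates, kubeconfigs, the kubelet config, and the static-pod manifests (including the local etcd manifest) under /etc/kubernetes; once the kubelet picks the manifests up, the apiserver process the following lines poll for should appear. A quick way to see what was written (illustrative, not run by the test):

    ls /etc/kubernetes/manifests   # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml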
	I1216 21:00:07.815495   60933 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:00:07.815657   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:08.316003   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:08.816482   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:09.315805   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:09.815863   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:10.316664   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:10.815852   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:11.316175   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:11.816446   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:12.316040   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:12.816172   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:13.316460   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:13.815700   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:14.316469   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:14.816539   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:15.315737   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:15.816465   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.316470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.816451   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:17.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:17.816470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:18.316165   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:18.816448   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:19.315972   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:19.815807   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:20.316465   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:20.816461   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.316731   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.816637   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:22.315727   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:22.816447   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:23.316510   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:23.816408   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:24.316454   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:24.816467   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:25.315789   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:25.816410   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.316537   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.816144   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.316659   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.816126   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:28.316568   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:28.816151   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:29.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:29.816510   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:30.315756   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:30.815774   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.316516   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.816503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:32.316499   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:32.816455   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:33.316478   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:33.816363   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:34.316057   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:34.815839   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:35.316503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:35.816590   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.316231   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.816011   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:37.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:37.816494   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:38.316486   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:38.816475   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:39.315762   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:39.816009   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:40.316444   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:40.816493   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.315869   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.816495   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:42.316034   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:42.816422   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:43.316432   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:43.815875   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:44.316036   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:44.816293   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.316458   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.815992   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:46.316054   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:46.816449   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:47.316113   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:47.816514   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:48.316353   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:48.816144   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:49.316435   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:49.815935   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:50.316437   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:50.816335   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:51.315747   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:51.816504   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:52.315695   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:52.816115   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:53.316498   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:53.816529   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:54.315689   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:54.816019   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:55.316484   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:55.816517   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.315858   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.816306   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:57.316447   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:57.815879   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:58.316493   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:58.816395   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:59.316225   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:59.816440   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:00.315769   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:00.816285   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.316020   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.818175   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:02.315780   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:02.816411   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:03.315758   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:03.815810   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:04.316731   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:04.816470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:05.316528   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:05.815792   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:06.316491   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:06.815977   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:07.316002   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:07.816043   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:07.816114   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:07.861866   60933 cri.go:89] found id: ""
	I1216 21:01:07.861896   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.861906   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:07.861913   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:07.861978   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:07.905674   60933 cri.go:89] found id: ""
	I1216 21:01:07.905700   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.905707   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:07.905713   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:07.905798   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:07.949936   60933 cri.go:89] found id: ""
	I1216 21:01:07.949966   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.949977   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:07.949984   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:07.950048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:07.987196   60933 cri.go:89] found id: ""
	I1216 21:01:07.987223   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.987232   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:07.987237   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:07.987341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:08.033126   60933 cri.go:89] found id: ""
	I1216 21:01:08.033156   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.033168   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:08.033176   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:08.033252   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:08.072223   60933 cri.go:89] found id: ""
	I1216 21:01:08.072257   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.072270   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:08.072278   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:08.072345   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:08.117257   60933 cri.go:89] found id: ""
	I1216 21:01:08.117288   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.117299   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:08.117319   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:08.117389   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:08.158059   60933 cri.go:89] found id: ""
	I1216 21:01:08.158096   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.158106   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:08.158119   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:08.158133   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:08.232930   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:08.232966   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:08.277173   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:08.277204   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:08.331763   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:08.331802   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:08.346150   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:08.346178   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:08.488668   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
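
The cycle above repeats throughout the portion of the log shown here: minikube polls for a kube-apiserver process, finds no control-plane containers at all via crictl, gathers kubelet/CRI-O/dmesg logs, and `kubectl describe nodes` fails because nothing is listening on localhost:8443. A minimal sketch of re-running the same checks by hand, based only on the commands visible in this log (the profile name is a placeholder, not taken from the report):

	# Placeholder profile name -- substitute the profile actually used by this test.
	PROFILE=old-k8s-version-000000
	# Is an apiserver process running on the node?
	minikube ssh -p "$PROFILE" -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Any kube-apiserver containers, running or exited?
	minikube ssh -p "$PROFILE" -- sudo crictl ps -a --name kube-apiserver
	# Kubelet and CRI-O logs, as gathered above.
	minikube ssh -p "$PROFILE" -- sudo journalctl -u kubelet -n 400
	minikube ssh -p "$PROFILE" -- sudo journalctl -u crio -n 400
	# If the apiserver were healthy this would return "ok"; here the connection is refused.
	minikube ssh -p "$PROFILE" -- curl -sk https://localhost:8443/healthz
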
	I1216 21:01:10.989383   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:11.003162   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:11.003266   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:11.040432   60933 cri.go:89] found id: ""
	I1216 21:01:11.040464   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.040475   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:11.040483   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:11.040547   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:11.083083   60933 cri.go:89] found id: ""
	I1216 21:01:11.083110   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.083117   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:11.083122   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:11.083182   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:11.122842   60933 cri.go:89] found id: ""
	I1216 21:01:11.122880   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.122893   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:11.122900   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:11.122969   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:11.168227   60933 cri.go:89] found id: ""
	I1216 21:01:11.168268   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.168279   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:11.168286   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:11.168359   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:11.218660   60933 cri.go:89] found id: ""
	I1216 21:01:11.218689   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.218701   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:11.218708   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:11.218774   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:11.281179   60933 cri.go:89] found id: ""
	I1216 21:01:11.281214   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.281227   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:11.281236   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:11.281315   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:11.326419   60933 cri.go:89] found id: ""
	I1216 21:01:11.326453   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.326464   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:11.326472   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:11.326535   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:11.368825   60933 cri.go:89] found id: ""
	I1216 21:01:11.368863   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.368875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:11.368887   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:11.368905   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:11.454848   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:11.454869   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:11.454888   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:11.541685   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:11.541724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:11.581804   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:11.581830   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:11.635800   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:11.635838   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:14.152441   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:14.167637   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:14.167720   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:14.206685   60933 cri.go:89] found id: ""
	I1216 21:01:14.206716   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.206728   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:14.206735   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:14.206796   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:14.248126   60933 cri.go:89] found id: ""
	I1216 21:01:14.248151   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.248159   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:14.248165   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:14.248215   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:14.285030   60933 cri.go:89] found id: ""
	I1216 21:01:14.285067   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.285079   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:14.285086   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:14.285151   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:14.325706   60933 cri.go:89] found id: ""
	I1216 21:01:14.325736   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.325747   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:14.325755   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:14.325820   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:14.369447   60933 cri.go:89] found id: ""
	I1216 21:01:14.369475   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.369486   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:14.369494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:14.369557   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:14.407792   60933 cri.go:89] found id: ""
	I1216 21:01:14.407818   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.407826   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:14.407832   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:14.407890   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:14.448380   60933 cri.go:89] found id: ""
	I1216 21:01:14.448411   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.448419   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:14.448424   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:14.448473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:14.487116   60933 cri.go:89] found id: ""
	I1216 21:01:14.487144   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.487154   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:14.487164   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:14.487177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:14.547342   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:14.547390   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:14.563385   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:14.563424   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:14.637363   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:14.637394   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:14.637410   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:14.715586   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:14.715626   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:17.258974   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:17.273896   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:17.273970   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:17.317359   60933 cri.go:89] found id: ""
	I1216 21:01:17.317394   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.317405   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:17.317412   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:17.317476   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:17.361422   60933 cri.go:89] found id: ""
	I1216 21:01:17.361451   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.361462   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:17.361469   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:17.361568   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:17.401466   60933 cri.go:89] found id: ""
	I1216 21:01:17.401522   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.401534   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:17.401544   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:17.401614   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:17.439560   60933 cri.go:89] found id: ""
	I1216 21:01:17.439588   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.439597   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:17.439603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:17.439655   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:17.480310   60933 cri.go:89] found id: ""
	I1216 21:01:17.480333   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.480340   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:17.480345   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:17.480393   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:17.528562   60933 cri.go:89] found id: ""
	I1216 21:01:17.528589   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.528600   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:17.528607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:17.528671   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:17.569863   60933 cri.go:89] found id: ""
	I1216 21:01:17.569900   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.569908   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:17.569914   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:17.569975   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:17.610840   60933 cri.go:89] found id: ""
	I1216 21:01:17.610867   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.610875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:17.610884   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:17.610895   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:17.661002   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:17.661041   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:17.675290   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:17.675318   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:17.743550   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:17.743572   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:17.743584   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:17.824479   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:17.824524   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:20.373687   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:20.389149   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:20.389244   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:20.429594   60933 cri.go:89] found id: ""
	I1216 21:01:20.429626   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.429634   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:20.429639   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:20.429693   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:20.473157   60933 cri.go:89] found id: ""
	I1216 21:01:20.473185   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.473193   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:20.473198   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:20.473264   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:20.512549   60933 cri.go:89] found id: ""
	I1216 21:01:20.512586   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.512597   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:20.512604   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:20.512676   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:20.549275   60933 cri.go:89] found id: ""
	I1216 21:01:20.549310   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.549323   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:20.549344   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:20.549408   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:20.587405   60933 cri.go:89] found id: ""
	I1216 21:01:20.587435   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.587443   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:20.587449   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:20.587515   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:20.625364   60933 cri.go:89] found id: ""
	I1216 21:01:20.625393   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.625400   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:20.625406   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:20.625456   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:20.664018   60933 cri.go:89] found id: ""
	I1216 21:01:20.664050   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.664061   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:20.664068   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:20.664117   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:20.703860   60933 cri.go:89] found id: ""
	I1216 21:01:20.703890   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.703898   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:20.703906   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:20.703918   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:20.754433   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:20.754470   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:20.770136   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:20.770172   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:20.854025   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:20.854049   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:20.854061   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:20.939628   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:20.939661   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:23.489645   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:23.503603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:23.503667   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:23.543044   60933 cri.go:89] found id: ""
	I1216 21:01:23.543070   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.543077   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:23.543083   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:23.543131   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:23.580333   60933 cri.go:89] found id: ""
	I1216 21:01:23.580362   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.580371   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:23.580377   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:23.580428   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:23.616732   60933 cri.go:89] found id: ""
	I1216 21:01:23.616766   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.616778   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:23.616785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:23.616834   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:23.655771   60933 cri.go:89] found id: ""
	I1216 21:01:23.655793   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.655801   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:23.655807   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:23.655861   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:23.694400   60933 cri.go:89] found id: ""
	I1216 21:01:23.694430   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.694437   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:23.694443   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:23.694500   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:23.732592   60933 cri.go:89] found id: ""
	I1216 21:01:23.732622   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.732630   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:23.732636   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:23.732688   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:23.769752   60933 cri.go:89] found id: ""
	I1216 21:01:23.769787   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.769801   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:23.769810   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:23.769892   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:23.806891   60933 cri.go:89] found id: ""
	I1216 21:01:23.806925   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.806936   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:23.806947   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:23.806963   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:23.822887   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:23.822912   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:23.898795   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:23.898817   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:23.898830   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:23.978036   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:23.978073   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:24.032500   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:24.032528   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:26.585937   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:26.599470   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:26.599543   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:26.635421   60933 cri.go:89] found id: ""
	I1216 21:01:26.635446   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.635455   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:26.635461   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:26.635527   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:26.675347   60933 cri.go:89] found id: ""
	I1216 21:01:26.675379   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.675390   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:26.675397   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:26.675464   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:26.715444   60933 cri.go:89] found id: ""
	I1216 21:01:26.715469   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.715480   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:26.715541   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:26.715619   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:26.753841   60933 cri.go:89] found id: ""
	I1216 21:01:26.753874   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.753893   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:26.753901   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:26.753963   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:26.791427   60933 cri.go:89] found id: ""
	I1216 21:01:26.791453   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.791464   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:26.791473   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:26.791539   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:26.832772   60933 cri.go:89] found id: ""
	I1216 21:01:26.832804   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.832816   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:26.832823   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:26.832887   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:26.869963   60933 cri.go:89] found id: ""
	I1216 21:01:26.869990   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.869997   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:26.870003   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:26.870068   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:26.906792   60933 cri.go:89] found id: ""
	I1216 21:01:26.906821   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.906862   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:26.906875   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:26.906894   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:26.994820   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:26.994863   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:27.034642   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:27.034686   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:27.089128   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:27.089168   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:27.104368   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:27.104401   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:27.179852   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:29.681052   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:29.695376   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:29.695464   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:29.735562   60933 cri.go:89] found id: ""
	I1216 21:01:29.735588   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.735596   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:29.735602   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:29.735650   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:29.772635   60933 cri.go:89] found id: ""
	I1216 21:01:29.772663   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.772672   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:29.772678   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:29.772737   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:29.810471   60933 cri.go:89] found id: ""
	I1216 21:01:29.810499   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.810509   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:29.810516   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:29.810575   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:29.845917   60933 cri.go:89] found id: ""
	I1216 21:01:29.845952   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.845966   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:29.845975   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:29.846048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:29.883866   60933 cri.go:89] found id: ""
	I1216 21:01:29.883892   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.883900   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:29.883906   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:29.883968   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:29.920696   60933 cri.go:89] found id: ""
	I1216 21:01:29.920729   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.920740   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:29.920748   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:29.920831   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:29.957977   60933 cri.go:89] found id: ""
	I1216 21:01:29.958056   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.958069   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:29.958079   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:29.958144   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:29.995436   60933 cri.go:89] found id: ""
	I1216 21:01:29.995464   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.995472   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:29.995481   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:29.995492   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:30.046819   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:30.046859   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:30.062754   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:30.062807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:30.138932   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:30.138959   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:30.138975   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:30.225720   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:30.225768   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:32.768185   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:32.782642   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:32.782729   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:32.821995   60933 cri.go:89] found id: ""
	I1216 21:01:32.822029   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.822040   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:32.822048   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:32.822112   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:32.858453   60933 cri.go:89] found id: ""
	I1216 21:01:32.858487   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.858497   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:32.858504   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:32.858570   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:32.896269   60933 cri.go:89] found id: ""
	I1216 21:01:32.896304   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.896316   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:32.896323   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:32.896384   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:32.936795   60933 cri.go:89] found id: ""
	I1216 21:01:32.936820   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.936832   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:32.936838   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:32.936904   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:32.974779   60933 cri.go:89] found id: ""
	I1216 21:01:32.974810   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.974821   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:32.974828   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:32.974892   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:33.012201   60933 cri.go:89] found id: ""
	I1216 21:01:33.012226   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.012234   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:33.012239   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:33.012287   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:33.049777   60933 cri.go:89] found id: ""
	I1216 21:01:33.049803   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.049811   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:33.049816   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:33.049873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:33.087820   60933 cri.go:89] found id: ""
	I1216 21:01:33.087851   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.087859   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:33.087870   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:33.087885   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:33.140816   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:33.140854   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:33.154817   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:33.154855   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:33.231445   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:33.231474   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:33.231496   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:33.311547   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:33.311586   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:35.855686   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:35.870404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:35.870485   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:35.908175   60933 cri.go:89] found id: ""
	I1216 21:01:35.908204   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.908215   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:35.908222   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:35.908284   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:35.955456   60933 cri.go:89] found id: ""
	I1216 21:01:35.955483   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.955494   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:35.955501   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:35.955562   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:35.995170   60933 cri.go:89] found id: ""
	I1216 21:01:35.995201   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.995211   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:35.995218   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:35.995305   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:36.033729   60933 cri.go:89] found id: ""
	I1216 21:01:36.033758   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.033769   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:36.033776   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:36.033840   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:36.072756   60933 cri.go:89] found id: ""
	I1216 21:01:36.072787   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.072799   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:36.072806   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:36.072873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:36.112149   60933 cri.go:89] found id: ""
	I1216 21:01:36.112187   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.112198   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:36.112205   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:36.112258   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:36.148742   60933 cri.go:89] found id: ""
	I1216 21:01:36.148770   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.148781   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:36.148789   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:36.148855   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:36.192827   60933 cri.go:89] found id: ""
	I1216 21:01:36.192864   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.192875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:36.192886   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:36.192901   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:36.243822   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:36.243867   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:36.258258   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:36.258292   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:36.342847   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:36.342876   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:36.342891   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:36.424741   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:36.424780   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:38.967334   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:38.982208   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:38.982283   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:39.023903   60933 cri.go:89] found id: ""
	I1216 21:01:39.023931   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.023939   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:39.023945   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:39.023997   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:39.070314   60933 cri.go:89] found id: ""
	I1216 21:01:39.070342   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.070351   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:39.070359   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:39.070423   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:39.115081   60933 cri.go:89] found id: ""
	I1216 21:01:39.115106   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.115113   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:39.115119   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:39.115178   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:39.151933   60933 cri.go:89] found id: ""
	I1216 21:01:39.151959   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.151967   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:39.151972   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:39.152022   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:39.192280   60933 cri.go:89] found id: ""
	I1216 21:01:39.192307   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.192315   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:39.192322   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:39.192370   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:39.228792   60933 cri.go:89] found id: ""
	I1216 21:01:39.228814   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.228822   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:39.228827   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:39.228887   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:39.266823   60933 cri.go:89] found id: ""
	I1216 21:01:39.266847   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.266854   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:39.266860   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:39.266908   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:39.301317   60933 cri.go:89] found id: ""
	I1216 21:01:39.301340   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.301348   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:39.301361   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:39.301372   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:39.386615   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:39.386663   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:39.433079   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:39.433112   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:39.489422   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:39.489458   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:39.504223   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:39.504259   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:39.587898   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
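
Every crictl query in these cycles returns found id: "" for every control-plane component, which is consistent with the kubelet never creating the static pods for this v1.20.0 control plane, rather than with an apiserver that started and crashed (an exited container would still show an id). A plausible follow-up check, sketched under the assumption that the standard kubeadm static-pod path is in use (it is not shown in this log), reusing the placeholder $PROFILE from the sketch above:

	# Were the control-plane manifests ever written?
	minikube ssh -p "$PROFILE" -- ls -l /etc/kubernetes/manifests
	# Recent kubelet activity, without paging.
	minikube ssh -p "$PROFILE" -- sudo journalctl -u kubelet -n 50 --no-pager
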
	I1216 21:01:42.088900   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:42.103768   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:42.103854   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:42.141956   60933 cri.go:89] found id: ""
	I1216 21:01:42.142026   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.142040   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:42.142049   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:42.142117   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:42.178754   60933 cri.go:89] found id: ""
	I1216 21:01:42.178782   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.178818   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:42.178833   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:42.178891   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:42.215872   60933 cri.go:89] found id: ""
	I1216 21:01:42.215905   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.215916   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:42.215923   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:42.215991   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:42.253854   60933 cri.go:89] found id: ""
	I1216 21:01:42.253885   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.253896   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:42.253904   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:42.253972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:42.290963   60933 cri.go:89] found id: ""
	I1216 21:01:42.291008   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.291023   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:42.291039   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:42.291109   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:42.332920   60933 cri.go:89] found id: ""
	I1216 21:01:42.332946   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.332953   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:42.332959   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:42.333006   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:42.375060   60933 cri.go:89] found id: ""
	I1216 21:01:42.375093   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.375104   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:42.375112   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:42.375189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:42.416593   60933 cri.go:89] found id: ""
	I1216 21:01:42.416621   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.416631   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:42.416639   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:42.416651   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:42.475204   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:42.475260   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:42.491022   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:42.491057   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:42.566645   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:42.566672   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:42.566687   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:42.646815   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:42.646856   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:45.191912   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:45.205487   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:45.205548   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:45.245350   60933 cri.go:89] found id: ""
	I1216 21:01:45.245389   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.245397   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:45.245404   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:45.245482   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:45.302126   60933 cri.go:89] found id: ""
	I1216 21:01:45.302158   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.302171   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:45.302178   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:45.302251   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:45.342888   60933 cri.go:89] found id: ""
	I1216 21:01:45.342917   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.342932   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:45.342937   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:45.342990   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:45.381545   60933 cri.go:89] found id: ""
	I1216 21:01:45.381574   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.381585   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:45.381593   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:45.381652   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:45.418081   60933 cri.go:89] found id: ""
	I1216 21:01:45.418118   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.418131   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:45.418138   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:45.418207   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:45.458610   60933 cri.go:89] found id: ""
	I1216 21:01:45.458637   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.458647   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:45.458655   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:45.458713   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:45.500102   60933 cri.go:89] found id: ""
	I1216 21:01:45.500137   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.500148   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:45.500155   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:45.500217   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:45.542074   60933 cri.go:89] found id: ""
	I1216 21:01:45.542103   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.542113   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:45.542122   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:45.542134   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:45.597577   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:45.597614   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:45.614028   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:45.614075   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:45.693014   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:45.693039   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:45.693056   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:45.772260   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:45.772295   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:48.317073   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:48.332176   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:48.332242   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:48.369946   60933 cri.go:89] found id: ""
	I1216 21:01:48.369976   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.369988   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:48.369994   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:48.370059   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:48.407628   60933 cri.go:89] found id: ""
	I1216 21:01:48.407661   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.407672   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:48.407680   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:48.407742   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:48.444377   60933 cri.go:89] found id: ""
	I1216 21:01:48.444403   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.444411   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:48.444416   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:48.444467   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:48.485674   60933 cri.go:89] found id: ""
	I1216 21:01:48.485710   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.485722   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:48.485730   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:48.485785   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:48.530577   60933 cri.go:89] found id: ""
	I1216 21:01:48.530610   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.530621   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:48.530628   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:48.530693   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:48.567128   60933 cri.go:89] found id: ""
	I1216 21:01:48.567151   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.567159   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:48.567165   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:48.567216   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:48.603294   60933 cri.go:89] found id: ""
	I1216 21:01:48.603320   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.603327   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:48.603333   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:48.603392   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:48.646221   60933 cri.go:89] found id: ""
	I1216 21:01:48.646253   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.646265   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:48.646288   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:48.646318   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:48.697589   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:48.697624   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:48.711916   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:48.711947   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:48.789068   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:48.789097   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:48.789113   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:48.872340   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:48.872378   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:51.418176   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:51.434851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:51.434948   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:51.478935   60933 cri.go:89] found id: ""
	I1216 21:01:51.478963   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.478975   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:51.478982   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:51.479043   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:51.524581   60933 cri.go:89] found id: ""
	I1216 21:01:51.524611   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.524622   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:51.524629   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:51.524686   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:51.563479   60933 cri.go:89] found id: ""
	I1216 21:01:51.563507   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.563516   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:51.563521   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:51.563578   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:51.601931   60933 cri.go:89] found id: ""
	I1216 21:01:51.601964   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.601975   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:51.601982   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:51.602044   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:51.638984   60933 cri.go:89] found id: ""
	I1216 21:01:51.639014   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.639025   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:51.639032   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:51.639093   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:51.681137   60933 cri.go:89] found id: ""
	I1216 21:01:51.681167   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.681178   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:51.681185   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:51.681263   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:51.722904   60933 cri.go:89] found id: ""
	I1216 21:01:51.722932   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.722941   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:51.722946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:51.722994   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:51.794403   60933 cri.go:89] found id: ""
	I1216 21:01:51.794434   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.794444   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:51.794453   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:51.794464   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:51.850688   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:51.850724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:51.866049   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:51.866079   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:51.949844   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:51.949880   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:51.949894   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:52.028981   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:52.029023   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:54.570192   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:54.585405   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:54.585489   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:54.627670   60933 cri.go:89] found id: ""
	I1216 21:01:54.627701   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.627712   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:54.627719   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:54.627782   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:54.671226   60933 cri.go:89] found id: ""
	I1216 21:01:54.671265   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.671276   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:54.671283   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:54.671337   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:54.705549   60933 cri.go:89] found id: ""
	I1216 21:01:54.705581   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.705592   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:54.705600   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:54.705663   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:54.743638   60933 cri.go:89] found id: ""
	I1216 21:01:54.743664   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.743671   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:54.743677   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:54.743728   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:54.781714   60933 cri.go:89] found id: ""
	I1216 21:01:54.781750   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.781760   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:54.781767   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:54.781831   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:54.830808   60933 cri.go:89] found id: ""
	I1216 21:01:54.830840   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.830851   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:54.830859   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:54.830923   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:54.868539   60933 cri.go:89] found id: ""
	I1216 21:01:54.868565   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.868573   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:54.868578   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:54.868626   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:54.906554   60933 cri.go:89] found id: ""
	I1216 21:01:54.906587   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.906595   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:54.906604   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:54.906617   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:54.960664   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:54.960696   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:54.975657   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:54.975686   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:55.052266   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:55.052293   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:55.052320   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:55.137894   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:55.137937   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:57.682769   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:57.699102   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:57.699184   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:57.764651   60933 cri.go:89] found id: ""
	I1216 21:01:57.764684   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.764692   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:57.764698   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:57.764755   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:57.805358   60933 cri.go:89] found id: ""
	I1216 21:01:57.805385   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.805395   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:57.805404   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:57.805474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:57.843589   60933 cri.go:89] found id: ""
	I1216 21:01:57.843623   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.843634   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:57.843644   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:57.843716   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:57.881725   60933 cri.go:89] found id: ""
	I1216 21:01:57.881748   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.881756   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:57.881761   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:57.881811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:57.922252   60933 cri.go:89] found id: ""
	I1216 21:01:57.922293   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.922305   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:57.922322   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:57.922385   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:57.962532   60933 cri.go:89] found id: ""
	I1216 21:01:57.962555   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.962562   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:57.962567   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:57.962615   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:58.002021   60933 cri.go:89] found id: ""
	I1216 21:01:58.002056   60933 logs.go:282] 0 containers: []
	W1216 21:01:58.002067   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:58.002074   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:58.002137   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:58.035648   60933 cri.go:89] found id: ""
	I1216 21:01:58.035672   60933 logs.go:282] 0 containers: []
	W1216 21:01:58.035680   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:58.035688   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:58.035699   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:58.116142   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:58.116177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:58.157683   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:58.157717   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:58.211686   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:58.211722   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:58.226385   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:58.226409   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:58.302287   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:00.802544   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:00.816325   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:00.816405   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:00.853031   60933 cri.go:89] found id: ""
	I1216 21:02:00.853057   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.853065   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:00.853070   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:00.853122   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:00.891040   60933 cri.go:89] found id: ""
	I1216 21:02:00.891071   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.891082   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:00.891089   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:00.891151   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:00.929145   60933 cri.go:89] found id: ""
	I1216 21:02:00.929168   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.929175   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:00.929181   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:00.929227   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:00.976469   60933 cri.go:89] found id: ""
	I1216 21:02:00.976492   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.976500   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:00.976505   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:00.976553   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:01.015053   60933 cri.go:89] found id: ""
	I1216 21:02:01.015078   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.015086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:01.015092   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:01.015150   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:01.052859   60933 cri.go:89] found id: ""
	I1216 21:02:01.052891   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.052902   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:01.052909   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:01.053028   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:01.091209   60933 cri.go:89] found id: ""
	I1216 21:02:01.091238   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.091259   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:01.091266   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:01.091341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:01.127013   60933 cri.go:89] found id: ""
	I1216 21:02:01.127038   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.127047   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:01.127058   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:01.127072   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:01.179642   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:01.179697   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:01.196390   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:01.196416   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:01.275446   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:01.275478   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:01.275493   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:01.354391   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:01.354429   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:03.897672   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:03.911596   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:03.911654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:03.955700   60933 cri.go:89] found id: ""
	I1216 21:02:03.955726   60933 logs.go:282] 0 containers: []
	W1216 21:02:03.955735   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:03.955741   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:03.955803   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:03.995661   60933 cri.go:89] found id: ""
	I1216 21:02:03.995696   60933 logs.go:282] 0 containers: []
	W1216 21:02:03.995706   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:03.995713   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:03.995772   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:04.031368   60933 cri.go:89] found id: ""
	I1216 21:02:04.031391   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.031398   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:04.031406   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:04.031455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:04.067633   60933 cri.go:89] found id: ""
	I1216 21:02:04.067659   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.067666   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:04.067671   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:04.067719   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:04.105734   60933 cri.go:89] found id: ""
	I1216 21:02:04.105758   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.105768   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:04.105773   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:04.105824   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:04.146542   60933 cri.go:89] found id: ""
	I1216 21:02:04.146564   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.146571   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:04.146577   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:04.146623   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:04.184433   60933 cri.go:89] found id: ""
	I1216 21:02:04.184462   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.184473   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:04.184480   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:04.184551   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:04.223077   60933 cri.go:89] found id: ""
	I1216 21:02:04.223106   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.223117   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:04.223127   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:04.223140   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:04.279618   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:04.279656   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:04.295841   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:04.295865   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:04.372609   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:04.372632   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:04.372648   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:04.457597   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:04.457631   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:07.006004   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:07.020394   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:07.020537   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:07.064242   60933 cri.go:89] found id: ""
	I1216 21:02:07.064274   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.064283   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:07.064289   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:07.064337   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:07.108865   60933 cri.go:89] found id: ""
	I1216 21:02:07.108899   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.108910   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:07.108917   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:07.108985   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:07.149021   60933 cri.go:89] found id: ""
	I1216 21:02:07.149051   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.149060   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:07.149066   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:07.149120   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:07.187808   60933 cri.go:89] found id: ""
	I1216 21:02:07.187833   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.187843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:07.187850   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:07.187912   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:07.228748   60933 cri.go:89] found id: ""
	I1216 21:02:07.228774   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.228785   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:07.228792   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:07.228853   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:07.267961   60933 cri.go:89] found id: ""
	I1216 21:02:07.267996   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.268012   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:07.268021   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:07.268099   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:07.312464   60933 cri.go:89] found id: ""
	I1216 21:02:07.312491   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.312498   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:07.312503   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:07.312554   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:07.351902   60933 cri.go:89] found id: ""
	I1216 21:02:07.351933   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.351946   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:07.351958   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:07.351974   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:07.405985   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:07.406050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:07.420796   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:07.420842   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:07.506527   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:07.506559   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:07.506574   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:07.587965   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:07.588001   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:10.132876   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:10.146785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:10.146858   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:10.189278   60933 cri.go:89] found id: ""
	I1216 21:02:10.189312   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.189324   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:10.189332   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:10.189402   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:10.228331   60933 cri.go:89] found id: ""
	I1216 21:02:10.228370   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.228378   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:10.228383   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:10.228436   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:10.266424   60933 cri.go:89] found id: ""
	I1216 21:02:10.266458   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.266470   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:10.266478   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:10.266542   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:10.305865   60933 cri.go:89] found id: ""
	I1216 21:02:10.305890   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.305902   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:10.305909   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:10.305968   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:10.344211   60933 cri.go:89] found id: ""
	I1216 21:02:10.344239   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.344247   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:10.344253   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:10.344314   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:10.381939   60933 cri.go:89] found id: ""
	I1216 21:02:10.381993   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.382004   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:10.382011   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:10.382076   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:10.418882   60933 cri.go:89] found id: ""
	I1216 21:02:10.418908   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.418915   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:10.418921   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:10.418972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:10.458397   60933 cri.go:89] found id: ""
	I1216 21:02:10.458425   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.458434   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:10.458447   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:10.458462   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:10.472152   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:10.472180   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:10.545888   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:10.545913   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:10.545926   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:10.627223   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:10.627293   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:10.676606   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:10.676633   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:13.227283   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:13.242871   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:13.242954   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:13.280676   60933 cri.go:89] found id: ""
	I1216 21:02:13.280711   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.280723   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:13.280731   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:13.280786   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:13.321357   60933 cri.go:89] found id: ""
	I1216 21:02:13.321389   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.321400   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:13.321408   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:13.321474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:13.359002   60933 cri.go:89] found id: ""
	I1216 21:02:13.359030   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.359042   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:13.359050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:13.359116   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:13.395879   60933 cri.go:89] found id: ""
	I1216 21:02:13.395922   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.395941   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:13.395950   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:13.396017   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:13.436761   60933 cri.go:89] found id: ""
	I1216 21:02:13.436781   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.436788   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:13.436793   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:13.436852   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:13.478839   60933 cri.go:89] found id: ""
	I1216 21:02:13.478869   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.478877   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:13.478883   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:13.478947   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:13.520013   60933 cri.go:89] found id: ""
	I1216 21:02:13.520037   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.520044   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:13.520050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:13.520124   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:13.556973   60933 cri.go:89] found id: ""
	I1216 21:02:13.557001   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.557013   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:13.557023   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:13.557039   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:13.613499   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:13.613537   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:13.628689   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:13.628724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:13.706556   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:13.706576   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:13.706589   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:13.786379   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:13.786419   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:16.333578   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:16.347948   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:16.348020   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:16.386928   60933 cri.go:89] found id: ""
	I1216 21:02:16.386955   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.386963   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:16.386969   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:16.387033   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:16.425192   60933 cri.go:89] found id: ""
	I1216 21:02:16.425253   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.425265   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:16.425273   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:16.425355   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:16.465522   60933 cri.go:89] found id: ""
	I1216 21:02:16.465554   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.465565   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:16.465573   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:16.465638   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:16.504567   60933 cri.go:89] found id: ""
	I1216 21:02:16.504605   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.504616   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:16.504624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:16.504694   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:16.541823   60933 cri.go:89] found id: ""
	I1216 21:02:16.541852   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.541864   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:16.541872   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:16.541942   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:16.580898   60933 cri.go:89] found id: ""
	I1216 21:02:16.580927   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.580938   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:16.580946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:16.581003   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:16.626006   60933 cri.go:89] found id: ""
	I1216 21:02:16.626036   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.626046   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:16.626053   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:16.626109   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:16.662686   60933 cri.go:89] found id: ""
	I1216 21:02:16.662712   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.662719   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:16.662728   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:16.662740   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:16.717939   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:16.717978   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:16.733431   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:16.733466   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:16.807379   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:16.807409   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:16.807421   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:16.896455   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:16.896492   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:19.442959   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:19.458684   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:19.458749   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:19.499907   60933 cri.go:89] found id: ""
	I1216 21:02:19.499938   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.499947   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:19.499954   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:19.500002   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:19.538010   60933 cri.go:89] found id: ""
	I1216 21:02:19.538035   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.538043   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:19.538049   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:19.538148   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:19.577097   60933 cri.go:89] found id: ""
	I1216 21:02:19.577131   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.577139   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:19.577145   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:19.577196   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:19.617288   60933 cri.go:89] found id: ""
	I1216 21:02:19.617316   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.617326   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:19.617332   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:19.617392   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:19.658066   60933 cri.go:89] found id: ""
	I1216 21:02:19.658090   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.658097   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:19.658103   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:19.658153   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:19.696077   60933 cri.go:89] found id: ""
	I1216 21:02:19.696108   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.696121   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:19.696131   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:19.696189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:19.737657   60933 cri.go:89] found id: ""
	I1216 21:02:19.737692   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.737704   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:19.737712   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:19.737776   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:19.778699   60933 cri.go:89] found id: ""
	I1216 21:02:19.778729   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.778738   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:19.778746   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:19.778757   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:19.841941   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:19.841979   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:19.857752   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:19.857788   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:19.935980   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:19.936004   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:19.936020   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:20.019999   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:20.020046   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:22.566398   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:22.580376   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:22.580472   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:22.620240   60933 cri.go:89] found id: ""
	I1216 21:02:22.620273   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.620284   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:22.620292   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:22.620355   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:22.656413   60933 cri.go:89] found id: ""
	I1216 21:02:22.656444   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.656455   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:22.656463   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:22.656531   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:22.690956   60933 cri.go:89] found id: ""
	I1216 21:02:22.690978   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.690986   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:22.690992   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:22.691040   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:22.734851   60933 cri.go:89] found id: ""
	I1216 21:02:22.734885   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.734895   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:22.734903   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:22.734969   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:22.774416   60933 cri.go:89] found id: ""
	I1216 21:02:22.774450   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.774461   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:22.774467   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:22.774535   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:22.811162   60933 cri.go:89] found id: ""
	I1216 21:02:22.811192   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.811204   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:22.811212   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:22.811296   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:22.851955   60933 cri.go:89] found id: ""
	I1216 21:02:22.851980   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.851987   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:22.851993   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:22.852051   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:22.888699   60933 cri.go:89] found id: ""
	I1216 21:02:22.888725   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.888736   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:22.888747   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:22.888769   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:22.944065   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:22.944100   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:22.960842   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:22.960872   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:23.036229   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:23.036251   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:23.036263   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:23.122493   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:23.122535   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:25.667995   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:25.682152   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:25.682222   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:25.719092   60933 cri.go:89] found id: ""
	I1216 21:02:25.719120   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.719130   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:25.719135   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:25.719190   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:25.757668   60933 cri.go:89] found id: ""
	I1216 21:02:25.757702   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.757712   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:25.757720   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:25.757791   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:25.809743   60933 cri.go:89] found id: ""
	I1216 21:02:25.809776   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.809787   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:25.809795   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:25.809857   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:25.849181   60933 cri.go:89] found id: ""
	I1216 21:02:25.849211   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.849222   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:25.849230   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:25.849295   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:25.891032   60933 cri.go:89] found id: ""
	I1216 21:02:25.891079   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.891091   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:25.891098   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:25.891169   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:25.930549   60933 cri.go:89] found id: ""
	I1216 21:02:25.930575   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.930583   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:25.930589   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:25.930639   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:25.971709   60933 cri.go:89] found id: ""
	I1216 21:02:25.971736   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.971744   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:25.971749   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:25.971797   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:26.007728   60933 cri.go:89] found id: ""
	I1216 21:02:26.007760   60933 logs.go:282] 0 containers: []
	W1216 21:02:26.007769   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:26.007778   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:26.007791   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:26.059710   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:26.059752   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:26.074596   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:26.074627   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:26.145892   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:26.145913   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:26.145924   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:26.225961   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:26.226000   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:28.772974   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:28.787001   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:28.787078   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:28.828176   60933 cri.go:89] found id: ""
	I1216 21:02:28.828206   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.828214   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:28.828223   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:28.828292   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:28.872750   60933 cri.go:89] found id: ""
	I1216 21:02:28.872781   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.872792   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:28.872798   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:28.872859   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:28.914844   60933 cri.go:89] found id: ""
	I1216 21:02:28.914871   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.914879   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:28.914884   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:28.914934   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:28.953541   60933 cri.go:89] found id: ""
	I1216 21:02:28.953569   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.953579   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:28.953587   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:28.953647   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:28.992768   60933 cri.go:89] found id: ""
	I1216 21:02:28.992797   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.992808   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:28.992816   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:28.992882   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:29.030069   60933 cri.go:89] found id: ""
	I1216 21:02:29.030102   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.030113   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:29.030121   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:29.030187   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:29.068629   60933 cri.go:89] found id: ""
	I1216 21:02:29.068658   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.068666   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:29.068677   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:29.068726   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:29.103664   60933 cri.go:89] found id: ""
	I1216 21:02:29.103697   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.103708   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:29.103719   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:29.103732   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:29.151225   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:29.151276   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:29.209448   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:29.209499   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:29.225232   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:29.225257   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:29.309812   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:29.309832   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:29.309846   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:31.896263   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:31.912378   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:31.912455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:31.950479   60933 cri.go:89] found id: ""
	I1216 21:02:31.950508   60933 logs.go:282] 0 containers: []
	W1216 21:02:31.950527   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:31.950535   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:31.950600   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:31.990479   60933 cri.go:89] found id: ""
	I1216 21:02:31.990504   60933 logs.go:282] 0 containers: []
	W1216 21:02:31.990515   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:31.990533   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:31.990599   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:32.032808   60933 cri.go:89] found id: ""
	I1216 21:02:32.032834   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.032843   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:32.032853   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:32.032913   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:32.069719   60933 cri.go:89] found id: ""
	I1216 21:02:32.069748   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.069759   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:32.069772   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:32.069830   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:32.106652   60933 cri.go:89] found id: ""
	I1216 21:02:32.106685   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.106694   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:32.106701   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:32.106767   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:32.145921   60933 cri.go:89] found id: ""
	I1216 21:02:32.145949   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.145957   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:32.145963   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:32.146014   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:32.206313   60933 cri.go:89] found id: ""
	I1216 21:02:32.206342   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.206351   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:32.206356   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:32.206410   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:32.262757   60933 cri.go:89] found id: ""
	I1216 21:02:32.262794   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.262806   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:32.262818   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:32.262832   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:32.320221   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:32.320251   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:32.375395   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:32.375437   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:32.391103   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:32.391137   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:32.474709   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:32.474741   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:32.474757   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:35.058809   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:35.073074   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:35.073157   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:35.115280   60933 cri.go:89] found id: ""
	I1216 21:02:35.115305   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.115312   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:35.115318   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:35.115378   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:35.151561   60933 cri.go:89] found id: ""
	I1216 21:02:35.151589   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.151597   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:35.151603   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:35.151654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:35.192061   60933 cri.go:89] found id: ""
	I1216 21:02:35.192088   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.192095   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:35.192111   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:35.192161   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:35.231493   60933 cri.go:89] found id: ""
	I1216 21:02:35.231523   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.231531   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:35.231538   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:35.231586   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:35.271236   60933 cri.go:89] found id: ""
	I1216 21:02:35.271291   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.271300   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:35.271306   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:35.271368   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:35.309950   60933 cri.go:89] found id: ""
	I1216 21:02:35.309980   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.309991   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:35.309999   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:35.310062   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:35.347762   60933 cri.go:89] found id: ""
	I1216 21:02:35.347790   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.347797   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:35.347803   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:35.347851   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:35.390732   60933 cri.go:89] found id: ""
	I1216 21:02:35.390757   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.390765   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:35.390774   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:35.390785   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:35.447068   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:35.447112   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:35.462873   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:35.462904   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:35.541120   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:35.541145   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:35.541162   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:35.627073   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:35.627120   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:38.170994   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:38.194371   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:38.194434   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:38.248023   60933 cri.go:89] found id: ""
	I1216 21:02:38.248050   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.248061   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:38.248069   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:38.248147   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:38.300143   60933 cri.go:89] found id: ""
	I1216 21:02:38.300175   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.300185   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:38.300193   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:38.300253   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:38.345273   60933 cri.go:89] found id: ""
	I1216 21:02:38.345300   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.345308   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:38.345314   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:38.345389   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:38.383032   60933 cri.go:89] found id: ""
	I1216 21:02:38.383066   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.383075   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:38.383081   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:38.383135   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:38.426042   60933 cri.go:89] found id: ""
	I1216 21:02:38.426074   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.426086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:38.426094   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:38.426159   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:38.467596   60933 cri.go:89] found id: ""
	I1216 21:02:38.467625   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.467634   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:38.467640   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:38.467692   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:38.509340   60933 cri.go:89] found id: ""
	I1216 21:02:38.509380   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.509391   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:38.509399   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:38.509470   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:38.549306   60933 cri.go:89] found id: ""
	I1216 21:02:38.549337   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.549354   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:38.549365   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:38.549381   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:38.564091   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:38.564131   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:38.639173   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:38.639201   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:38.639219   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:38.716320   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:38.716376   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:38.756779   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:38.756815   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:41.310680   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:41.327606   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:41.327684   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:41.371622   60933 cri.go:89] found id: ""
	I1216 21:02:41.371657   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.371670   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:41.371679   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:41.371739   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:41.408149   60933 cri.go:89] found id: ""
	I1216 21:02:41.408187   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.408198   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:41.408203   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:41.408252   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:41.448445   60933 cri.go:89] found id: ""
	I1216 21:02:41.448471   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.448478   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:41.448484   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:41.448533   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:41.489957   60933 cri.go:89] found id: ""
	I1216 21:02:41.489989   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.490000   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:41.490007   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:41.490069   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:41.532891   60933 cri.go:89] found id: ""
	I1216 21:02:41.532918   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.532930   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:41.532937   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:41.532992   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:41.570315   60933 cri.go:89] found id: ""
	I1216 21:02:41.570342   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.570351   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:41.570357   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:41.570455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:41.606833   60933 cri.go:89] found id: ""
	I1216 21:02:41.606867   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.606880   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:41.606890   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:41.606959   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:41.643862   60933 cri.go:89] found id: ""
	I1216 21:02:41.643886   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.643894   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:41.643902   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:41.643914   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:41.657621   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:41.657654   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:41.732256   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:41.732281   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:41.732295   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:41.822045   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:41.822081   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:41.863900   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:41.863933   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:44.425154   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:44.440148   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:44.440223   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:44.478216   60933 cri.go:89] found id: ""
	I1216 21:02:44.478247   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.478258   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:44.478266   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:44.478329   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:44.517054   60933 cri.go:89] found id: ""
	I1216 21:02:44.517078   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.517084   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:44.517090   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:44.517137   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:44.554683   60933 cri.go:89] found id: ""
	I1216 21:02:44.554778   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.554801   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:44.554845   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:44.554927   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:44.600748   60933 cri.go:89] found id: ""
	I1216 21:02:44.600788   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.600800   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:44.600809   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:44.600863   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:44.637564   60933 cri.go:89] found id: ""
	I1216 21:02:44.637592   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.637600   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:44.637606   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:44.637656   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:44.676619   60933 cri.go:89] found id: ""
	I1216 21:02:44.676662   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.676674   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:44.676683   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:44.676755   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:44.715920   60933 cri.go:89] found id: ""
	I1216 21:02:44.715956   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.715964   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:44.715970   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:44.716027   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:44.755134   60933 cri.go:89] found id: ""
	I1216 21:02:44.755167   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.755179   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:44.755191   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:44.755202   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:44.796135   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:44.796164   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:44.850550   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:44.850593   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:44.865278   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:44.865305   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:44.942987   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:44.943013   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:44.943026   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:47.529850   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:47.546292   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:47.546369   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:47.589597   60933 cri.go:89] found id: ""
	I1216 21:02:47.589627   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.589640   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:47.589648   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:47.589713   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:47.630998   60933 cri.go:89] found id: ""
	I1216 21:02:47.631030   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.631043   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:47.631051   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:47.631118   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:47.670118   60933 cri.go:89] found id: ""
	I1216 21:02:47.670150   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.670162   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:47.670169   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:47.670233   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:47.714516   60933 cri.go:89] found id: ""
	I1216 21:02:47.714549   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.714560   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:47.714568   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:47.714631   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:47.752042   60933 cri.go:89] found id: ""
	I1216 21:02:47.752074   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.752086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:47.752093   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:47.752158   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:47.793612   60933 cri.go:89] found id: ""
	I1216 21:02:47.793645   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.793656   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:47.793664   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:47.793734   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:47.833489   60933 cri.go:89] found id: ""
	I1216 21:02:47.833518   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.833529   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:47.833541   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:47.833602   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:47.869744   60933 cri.go:89] found id: ""
	I1216 21:02:47.869772   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.869783   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:47.869793   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:47.869809   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:47.910640   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:47.910674   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:47.965747   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:47.965781   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:47.979760   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:47.979786   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:48.056887   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:48.056917   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:48.056933   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:50.641224   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:50.657267   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:50.657346   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:50.696890   60933 cri.go:89] found id: ""
	I1216 21:02:50.696916   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.696924   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:50.696930   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:50.696993   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:50.734485   60933 cri.go:89] found id: ""
	I1216 21:02:50.734514   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.734524   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:50.734533   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:50.734598   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:50.776241   60933 cri.go:89] found id: ""
	I1216 21:02:50.776268   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.776277   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:50.776283   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:50.776358   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:50.816449   60933 cri.go:89] found id: ""
	I1216 21:02:50.816482   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.816493   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:50.816501   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:50.816561   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:50.857458   60933 cri.go:89] found id: ""
	I1216 21:02:50.857481   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.857488   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:50.857494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:50.857556   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:50.895367   60933 cri.go:89] found id: ""
	I1216 21:02:50.895391   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.895398   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:50.895404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:50.895466   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:50.934101   60933 cri.go:89] found id: ""
	I1216 21:02:50.934128   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.934138   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:50.934152   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:50.934212   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:50.978625   60933 cri.go:89] found id: ""
	I1216 21:02:50.978654   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.978665   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:50.978675   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:50.978688   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:51.061867   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:51.061908   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:51.101188   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:51.101228   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:51.157426   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:51.157470   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:51.172835   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:51.172882   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:51.247678   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:53.748503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:53.763357   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:53.763425   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:53.807963   60933 cri.go:89] found id: ""
	I1216 21:02:53.807990   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.807999   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:53.808005   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:53.808063   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:53.846840   60933 cri.go:89] found id: ""
	I1216 21:02:53.846867   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.846876   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:53.846881   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:53.846929   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:53.885099   60933 cri.go:89] found id: ""
	I1216 21:02:53.885131   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.885146   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:53.885156   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:53.885226   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:53.923859   60933 cri.go:89] found id: ""
	I1216 21:02:53.923890   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.923901   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:53.923908   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:53.923972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:53.964150   60933 cri.go:89] found id: ""
	I1216 21:02:53.964176   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.964186   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:53.964201   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:53.964265   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:54.004676   60933 cri.go:89] found id: ""
	I1216 21:02:54.004707   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.004718   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:54.004725   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:54.004789   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:54.042560   60933 cri.go:89] found id: ""
	I1216 21:02:54.042585   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.042595   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:54.042603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:54.042666   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:54.081002   60933 cri.go:89] found id: ""
	I1216 21:02:54.081030   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.081038   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:54.081046   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:54.081058   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:54.132825   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:54.132865   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:54.147793   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:54.147821   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:54.226668   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:54.226692   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:54.226704   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:54.307792   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:54.307832   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:56.852207   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:56.866404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:56.866469   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:56.911786   60933 cri.go:89] found id: ""
	I1216 21:02:56.911811   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.911820   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:56.911829   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:56.911886   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:56.953491   60933 cri.go:89] found id: ""
	I1216 21:02:56.953520   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.953535   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:56.953543   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:56.953610   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:56.991569   60933 cri.go:89] found id: ""
	I1216 21:02:56.991605   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.991616   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:56.991622   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:56.991685   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:57.026808   60933 cri.go:89] found id: ""
	I1216 21:02:57.026837   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.026845   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:57.026851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:57.026913   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:57.065539   60933 cri.go:89] found id: ""
	I1216 21:02:57.065569   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.065577   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:57.065583   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:57.065642   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:57.103911   60933 cri.go:89] found id: ""
	I1216 21:02:57.103942   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.103952   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:57.103960   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:57.104015   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:57.141177   60933 cri.go:89] found id: ""
	I1216 21:02:57.141200   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.141207   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:57.141213   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:57.141262   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:57.178532   60933 cri.go:89] found id: ""
	I1216 21:02:57.178590   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.178604   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:57.178614   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:57.178629   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:57.234811   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:57.234846   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:57.251540   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:57.251569   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:57.329029   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:57.329061   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:57.329077   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:57.412624   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:57.412665   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:59.960422   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:59.974889   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:59.974966   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:00.012641   60933 cri.go:89] found id: ""
	I1216 21:03:00.012669   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.012676   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:00.012682   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:00.012730   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:00.053730   60933 cri.go:89] found id: ""
	I1216 21:03:00.053766   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.053778   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:00.053785   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:00.053847   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:00.091213   60933 cri.go:89] found id: ""
	I1216 21:03:00.091261   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.091274   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:00.091283   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:00.091357   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:00.131357   60933 cri.go:89] found id: ""
	I1216 21:03:00.131382   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.131390   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:00.131396   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:00.131460   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:00.168331   60933 cri.go:89] found id: ""
	I1216 21:03:00.168362   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.168373   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:00.168380   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:00.168446   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:00.208326   60933 cri.go:89] found id: ""
	I1216 21:03:00.208360   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.208369   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:00.208377   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:00.208440   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:00.245775   60933 cri.go:89] found id: ""
	I1216 21:03:00.245800   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.245808   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:00.245814   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:00.245863   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:00.283062   60933 cri.go:89] found id: ""
	I1216 21:03:00.283091   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.283100   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:00.283108   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:00.283119   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:00.358767   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:00.358787   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:00.358799   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:00.443422   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:00.443460   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:00.491511   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:00.491551   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:00.566131   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:00.566172   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:03.080319   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:03.094733   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:03.094818   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:03.132388   60933 cri.go:89] found id: ""
	I1216 21:03:03.132419   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.132428   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:03.132433   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:03.132488   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:03.172345   60933 cri.go:89] found id: ""
	I1216 21:03:03.172374   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.172386   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:03.172393   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:03.172474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:03.210444   60933 cri.go:89] found id: ""
	I1216 21:03:03.210479   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.210488   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:03.210494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:03.210544   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:03.248605   60933 cri.go:89] found id: ""
	I1216 21:03:03.248644   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.248656   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:03.248664   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:03.248723   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:03.286822   60933 cri.go:89] found id: ""
	I1216 21:03:03.286854   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.286862   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:03.286868   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:03.286921   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:03.329304   60933 cri.go:89] found id: ""
	I1216 21:03:03.329333   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.329344   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:03.329352   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:03.329417   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:03.367337   60933 cri.go:89] found id: ""
	I1216 21:03:03.367361   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.367368   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:03.367373   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:03.367420   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:03.409799   60933 cri.go:89] found id: ""
	I1216 21:03:03.409821   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.409829   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:03.409838   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:03.409850   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:03.466941   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:03.466976   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:03.483090   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:03.483117   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:03.566835   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:03.566860   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:03.566878   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:03.649747   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:03.649793   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:06.193505   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:06.207797   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:06.207878   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:06.245401   60933 cri.go:89] found id: ""
	I1216 21:03:06.245437   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.245448   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:06.245456   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:06.245521   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:06.301205   60933 cri.go:89] found id: ""
	I1216 21:03:06.301239   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.301250   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:06.301257   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:06.301326   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:06.340325   60933 cri.go:89] found id: ""
	I1216 21:03:06.340352   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.340362   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:06.340369   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:06.340429   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:06.378321   60933 cri.go:89] found id: ""
	I1216 21:03:06.378351   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.378359   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:06.378365   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:06.378422   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:06.416354   60933 cri.go:89] found id: ""
	I1216 21:03:06.416390   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.416401   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:06.416409   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:06.416473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:06.459926   60933 cri.go:89] found id: ""
	I1216 21:03:06.459955   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.459967   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:06.459975   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:06.460063   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:06.501818   60933 cri.go:89] found id: ""
	I1216 21:03:06.501849   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.501860   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:06.501866   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:06.501926   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:06.537552   60933 cri.go:89] found id: ""
	I1216 21:03:06.537583   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.537598   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:06.537607   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:06.537621   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:06.592170   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:06.592212   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:06.607148   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:06.607183   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:06.676114   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:06.676140   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:06.676151   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:06.756009   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:06.756052   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:09.298166   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:09.313104   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:09.313189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:09.356598   60933 cri.go:89] found id: ""
	I1216 21:03:09.356625   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.356640   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:09.356649   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:09.356715   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:09.395406   60933 cri.go:89] found id: ""
	I1216 21:03:09.395439   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.395449   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:09.395456   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:09.395521   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:09.440401   60933 cri.go:89] found id: ""
	I1216 21:03:09.440423   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.440430   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:09.440435   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:09.440504   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:09.478798   60933 cri.go:89] found id: ""
	I1216 21:03:09.478828   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.478843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:09.478853   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:09.478921   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:09.515542   60933 cri.go:89] found id: ""
	I1216 21:03:09.515575   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.515587   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:09.515596   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:09.515654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:09.554150   60933 cri.go:89] found id: ""
	I1216 21:03:09.554183   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.554194   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:09.554205   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:09.554279   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:09.591699   60933 cri.go:89] found id: ""
	I1216 21:03:09.591730   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.591740   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:09.591747   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:09.591811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:09.629938   60933 cri.go:89] found id: ""
	I1216 21:03:09.629970   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.629980   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:09.629991   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:09.630008   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:09.711255   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:09.711284   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:09.711300   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:09.790202   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:09.790243   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:09.839567   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:09.839597   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:09.893010   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:09.893050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:12.409934   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:12.423715   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:12.423789   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:12.461995   60933 cri.go:89] found id: ""
	I1216 21:03:12.462038   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.462046   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:12.462052   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:12.462101   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:12.501738   60933 cri.go:89] found id: ""
	I1216 21:03:12.501769   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.501779   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:12.501785   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:12.501833   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:12.541758   60933 cri.go:89] found id: ""
	I1216 21:03:12.541785   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.541795   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:12.541802   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:12.541850   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:12.579173   60933 cri.go:89] found id: ""
	I1216 21:03:12.579199   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.579206   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:12.579212   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:12.579302   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:12.624382   60933 cri.go:89] found id: ""
	I1216 21:03:12.624407   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.624418   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:12.624426   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:12.624488   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:12.665139   60933 cri.go:89] found id: ""
	I1216 21:03:12.665178   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.665190   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:12.665200   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:12.665274   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:12.711586   60933 cri.go:89] found id: ""
	I1216 21:03:12.711611   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.711619   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:12.711627   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:12.711678   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:12.761566   60933 cri.go:89] found id: ""
	I1216 21:03:12.761600   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.761612   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:12.761624   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:12.761640   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:12.824282   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:12.824315   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:12.839335   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:12.839371   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:12.918317   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:12.918341   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:12.918357   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:13.000375   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:13.000410   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:15.542372   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:15.556877   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:15.556960   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:15.599345   60933 cri.go:89] found id: ""
	I1216 21:03:15.599378   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.599389   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:15.599414   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:15.599479   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:15.642072   60933 cri.go:89] found id: ""
	I1216 21:03:15.642106   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.642116   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:15.642124   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:15.642189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:15.679989   60933 cri.go:89] found id: ""
	I1216 21:03:15.680025   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.680036   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:15.680044   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:15.680103   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:15.718343   60933 cri.go:89] found id: ""
	I1216 21:03:15.718371   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.718378   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:15.718384   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:15.718433   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:15.759937   60933 cri.go:89] found id: ""
	I1216 21:03:15.759971   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.759981   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:15.759988   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:15.760081   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:15.801434   60933 cri.go:89] found id: ""
	I1216 21:03:15.801463   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.801471   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:15.801477   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:15.801540   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:15.841855   60933 cri.go:89] found id: ""
	I1216 21:03:15.841879   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.841886   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:15.841892   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:15.841962   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:15.883951   60933 cri.go:89] found id: ""
	I1216 21:03:15.883974   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.883982   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:15.883990   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:15.884004   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:15.960868   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:15.960902   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:16.005700   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:16.005730   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:16.061128   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:16.061165   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:16.075601   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:16.075630   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:16.147810   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:18.648677   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:18.663298   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:18.663367   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:18.713281   60933 cri.go:89] found id: ""
	I1216 21:03:18.713313   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.713324   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:18.713332   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:18.713396   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:18.764861   60933 cri.go:89] found id: ""
	I1216 21:03:18.764892   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.764905   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:18.764912   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:18.764978   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:18.816140   60933 cri.go:89] found id: ""
	I1216 21:03:18.816170   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.816180   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:18.816188   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:18.816251   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:18.852118   60933 cri.go:89] found id: ""
	I1216 21:03:18.852151   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.852163   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:18.852171   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:18.852235   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:18.887996   60933 cri.go:89] found id: ""
	I1216 21:03:18.888018   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.888025   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:18.888031   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:18.888089   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:18.925415   60933 cri.go:89] found id: ""
	I1216 21:03:18.925437   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.925445   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:18.925451   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:18.925498   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:18.964853   60933 cri.go:89] found id: ""
	I1216 21:03:18.964884   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.964892   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:18.964897   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:18.964964   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:19.000822   60933 cri.go:89] found id: ""
	I1216 21:03:19.000848   60933 logs.go:282] 0 containers: []
	W1216 21:03:19.000856   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:19.000865   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:19.000879   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:19.051571   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:19.051612   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:19.066737   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:19.066767   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:19.143120   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:19.143144   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:19.143156   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:19.229811   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:19.229850   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:21.776440   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:21.792869   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:21.792951   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:21.831100   60933 cri.go:89] found id: ""
	I1216 21:03:21.831127   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.831134   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:21.831140   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:21.831196   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:21.869124   60933 cri.go:89] found id: ""
	I1216 21:03:21.869147   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.869155   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:21.869160   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:21.869215   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:21.909891   60933 cri.go:89] found id: ""
	I1216 21:03:21.909926   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.909938   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:21.909946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:21.910032   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:21.949140   60933 cri.go:89] found id: ""
	I1216 21:03:21.949169   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.949179   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:21.949186   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:21.949245   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:21.987741   60933 cri.go:89] found id: ""
	I1216 21:03:21.987771   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.987780   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:21.987785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:21.987839   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:22.025565   60933 cri.go:89] found id: ""
	I1216 21:03:22.025593   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.025601   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:22.025607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:22.025659   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:22.062076   60933 cri.go:89] found id: ""
	I1216 21:03:22.062110   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.062120   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:22.062127   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:22.062198   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:22.102037   60933 cri.go:89] found id: ""
	I1216 21:03:22.102065   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.102093   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:22.102105   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:22.102122   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:22.159185   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:22.159219   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:22.175139   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:22.175168   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:22.255769   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:22.255801   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:22.255817   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:22.339633   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:22.339681   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:24.883865   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:24.898198   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:24.898287   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:24.939472   60933 cri.go:89] found id: ""
	I1216 21:03:24.939500   60933 logs.go:282] 0 containers: []
	W1216 21:03:24.939511   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:24.939518   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:24.939583   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:24.981798   60933 cri.go:89] found id: ""
	I1216 21:03:24.981822   60933 logs.go:282] 0 containers: []
	W1216 21:03:24.981829   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:24.981834   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:24.981889   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:25.021332   60933 cri.go:89] found id: ""
	I1216 21:03:25.021366   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.021373   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:25.021379   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:25.021431   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:25.057811   60933 cri.go:89] found id: ""
	I1216 21:03:25.057836   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.057843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:25.057848   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:25.057907   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:25.093852   60933 cri.go:89] found id: ""
	I1216 21:03:25.093881   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.093890   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:25.093895   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:25.093945   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:25.132779   60933 cri.go:89] found id: ""
	I1216 21:03:25.132813   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.132825   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:25.132834   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:25.132912   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:25.173942   60933 cri.go:89] found id: ""
	I1216 21:03:25.173967   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.173974   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:25.173990   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:25.174048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:25.213105   60933 cri.go:89] found id: ""
	I1216 21:03:25.213127   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.213135   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:25.213144   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:25.213155   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:25.267517   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:25.267557   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:25.284144   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:25.284177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:25.362901   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:25.362931   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:25.362947   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:25.450193   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:25.450227   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:27.995716   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:28.012044   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:28.012138   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:28.050404   60933 cri.go:89] found id: ""
	I1216 21:03:28.050432   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.050441   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:28.050446   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:28.050492   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:28.087830   60933 cri.go:89] found id: ""
	I1216 21:03:28.087855   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.087862   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:28.087885   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:28.087933   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:28.125122   60933 cri.go:89] found id: ""
	I1216 21:03:28.125147   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.125154   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:28.125160   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:28.125233   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:28.160619   60933 cri.go:89] found id: ""
	I1216 21:03:28.160646   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.160655   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:28.160661   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:28.160726   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:28.198951   60933 cri.go:89] found id: ""
	I1216 21:03:28.198977   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.198986   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:28.198993   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:28.199059   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:28.236596   60933 cri.go:89] found id: ""
	I1216 21:03:28.236621   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.236629   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:28.236635   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:28.236707   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:28.273955   60933 cri.go:89] found id: ""
	I1216 21:03:28.273979   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.273986   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:28.273992   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:28.274061   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:28.311908   60933 cri.go:89] found id: ""
	I1216 21:03:28.311943   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.311954   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:28.311965   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:28.311979   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:28.363870   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:28.363910   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:28.379919   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:28.379945   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:28.459998   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:28.460019   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:28.460030   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:28.543229   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:28.543306   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:31.086525   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:31.100833   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:31.100950   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:31.141356   60933 cri.go:89] found id: ""
	I1216 21:03:31.141385   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.141396   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:31.141403   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:31.141465   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:31.176609   60933 cri.go:89] found id: ""
	I1216 21:03:31.176641   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.176650   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:31.176657   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:31.176721   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:31.213959   60933 cri.go:89] found id: ""
	I1216 21:03:31.213984   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.213991   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:31.213997   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:31.214058   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:31.255183   60933 cri.go:89] found id: ""
	I1216 21:03:31.255208   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.255215   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:31.255220   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:31.255297   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:31.293475   60933 cri.go:89] found id: ""
	I1216 21:03:31.293501   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.293508   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:31.293514   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:31.293561   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:31.332010   60933 cri.go:89] found id: ""
	I1216 21:03:31.332041   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.332052   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:31.332061   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:31.332119   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:31.370301   60933 cri.go:89] found id: ""
	I1216 21:03:31.370331   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.370342   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:31.370349   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:31.370414   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:31.419526   60933 cri.go:89] found id: ""
	I1216 21:03:31.419553   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.419561   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:31.419570   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:31.419583   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:31.480125   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:31.480160   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:31.495464   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:31.495497   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:31.570747   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:31.570773   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:31.570788   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:31.651521   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:31.651564   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:34.200969   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:34.216519   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:34.216596   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:34.254185   60933 cri.go:89] found id: ""
	I1216 21:03:34.254218   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.254227   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:34.254242   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:34.254312   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:34.293194   60933 cri.go:89] found id: ""
	I1216 21:03:34.293225   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.293236   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:34.293242   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:34.293297   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:34.335002   60933 cri.go:89] found id: ""
	I1216 21:03:34.335030   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.335042   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:34.335050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:34.335112   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:34.370854   60933 cri.go:89] found id: ""
	I1216 21:03:34.370880   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.370887   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:34.370893   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:34.370938   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:34.409155   60933 cri.go:89] found id: ""
	I1216 21:03:34.409181   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.409189   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:34.409195   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:34.409256   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:34.448555   60933 cri.go:89] found id: ""
	I1216 21:03:34.448583   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.448594   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:34.448601   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:34.448663   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:34.486800   60933 cri.go:89] found id: ""
	I1216 21:03:34.486829   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.486842   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:34.486851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:34.486919   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:34.530274   60933 cri.go:89] found id: ""
	I1216 21:03:34.530299   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.530307   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:34.530317   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:34.530335   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:34.601587   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:34.601620   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:34.601637   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:34.680215   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:34.680250   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:34.721362   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:34.721389   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:34.776652   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:34.776693   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:37.292877   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:37.306976   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:37.307060   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:37.349370   60933 cri.go:89] found id: ""
	I1216 21:03:37.349405   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.349416   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:37.349424   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:37.349486   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:37.387213   60933 cri.go:89] found id: ""
	I1216 21:03:37.387271   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.387285   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:37.387294   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:37.387361   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:37.427138   60933 cri.go:89] found id: ""
	I1216 21:03:37.427164   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.427175   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:37.427182   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:37.427269   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:37.466751   60933 cri.go:89] found id: ""
	I1216 21:03:37.466776   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.466783   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:37.466788   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:37.466846   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:37.505078   60933 cri.go:89] found id: ""
	I1216 21:03:37.505115   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.505123   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:37.505128   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:37.505189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:37.548642   60933 cri.go:89] found id: ""
	I1216 21:03:37.548665   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.548673   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:37.548679   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:37.548738   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:37.592354   60933 cri.go:89] found id: ""
	I1216 21:03:37.592379   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.592386   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:37.592391   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:37.592441   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:37.631179   60933 cri.go:89] found id: ""
	I1216 21:03:37.631212   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.631221   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:37.631230   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:37.631261   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:37.683021   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:37.683062   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:37.698056   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:37.698087   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:37.774368   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:37.774397   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:37.774422   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:37.860470   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:37.860511   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:40.405278   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:40.420390   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:40.420473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:40.463963   60933 cri.go:89] found id: ""
	I1216 21:03:40.463994   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.464033   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:40.464041   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:40.464107   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:40.510321   60933 cri.go:89] found id: ""
	I1216 21:03:40.510352   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.510369   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:40.510376   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:40.510441   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:40.546580   60933 cri.go:89] found id: ""
	I1216 21:03:40.546610   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.546619   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:40.546624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:40.546686   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:40.583109   60933 cri.go:89] found id: ""
	I1216 21:03:40.583136   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.583144   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:40.583149   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:40.583202   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:40.628747   60933 cri.go:89] found id: ""
	I1216 21:03:40.628771   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.628778   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:40.628784   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:40.628845   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:40.663757   60933 cri.go:89] found id: ""
	I1216 21:03:40.663785   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.663796   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:40.663804   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:40.663867   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:40.703483   60933 cri.go:89] found id: ""
	I1216 21:03:40.703513   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.703522   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:40.703528   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:40.703592   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:40.742585   60933 cri.go:89] found id: ""
	I1216 21:03:40.742622   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.742632   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:40.742641   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:40.742653   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:40.757771   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:40.757809   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:40.837615   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:40.837642   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:40.837656   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:40.915403   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:40.915442   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:40.960762   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:40.960790   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:43.515302   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:43.530831   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:43.530906   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:43.571680   60933 cri.go:89] found id: ""
	I1216 21:03:43.571704   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.571712   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:43.571718   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:43.571779   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:43.615912   60933 cri.go:89] found id: ""
	I1216 21:03:43.615940   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.615948   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:43.615955   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:43.616013   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:43.654206   60933 cri.go:89] found id: ""
	I1216 21:03:43.654231   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.654241   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:43.654249   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:43.654309   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:43.690509   60933 cri.go:89] found id: ""
	I1216 21:03:43.690533   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.690541   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:43.690548   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:43.690595   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:43.728601   60933 cri.go:89] found id: ""
	I1216 21:03:43.728627   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.728634   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:43.728639   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:43.728685   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:43.769092   60933 cri.go:89] found id: ""
	I1216 21:03:43.769130   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.769198   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:43.769215   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:43.769292   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:43.812492   60933 cri.go:89] found id: ""
	I1216 21:03:43.812525   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.812537   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:43.812544   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:43.812613   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:43.852748   60933 cri.go:89] found id: ""
	I1216 21:03:43.852778   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.852787   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:43.852795   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:43.852807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:43.907800   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:43.907839   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:43.922806   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:43.922833   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:44.002511   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:44.002538   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:44.002551   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:44.081760   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:44.081801   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:46.625868   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:46.640266   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:46.640341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:46.677137   60933 cri.go:89] found id: ""
	I1216 21:03:46.677168   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.677179   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:46.677185   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:46.677241   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:46.714340   60933 cri.go:89] found id: ""
	I1216 21:03:46.714373   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.714382   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:46.714389   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:46.714449   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:46.752713   60933 cri.go:89] found id: ""
	I1216 21:03:46.752743   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.752754   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:46.752763   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:46.752827   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:46.790787   60933 cri.go:89] found id: ""
	I1216 21:03:46.790821   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.790837   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:46.790845   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:46.790902   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:46.827905   60933 cri.go:89] found id: ""
	I1216 21:03:46.827934   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.827945   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:46.827954   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:46.828023   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:46.863522   60933 cri.go:89] found id: ""
	I1216 21:03:46.863547   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.863563   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:46.863570   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:46.863634   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:46.906005   60933 cri.go:89] found id: ""
	I1216 21:03:46.906035   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.906044   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:46.906049   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:46.906103   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:46.947639   60933 cri.go:89] found id: ""
	I1216 21:03:46.947668   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.947679   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:46.947691   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:46.947706   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:47.001693   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:47.001732   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:47.023122   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:47.023166   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:47.108257   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:47.108291   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:47.108303   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:47.184768   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:47.184807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:49.729433   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:49.743836   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:49.743903   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:49.783021   60933 cri.go:89] found id: ""
	I1216 21:03:49.783054   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.783066   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:49.783074   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:49.783144   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:49.820371   60933 cri.go:89] found id: ""
	I1216 21:03:49.820399   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.820409   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:49.820416   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:49.820476   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:49.857918   60933 cri.go:89] found id: ""
	I1216 21:03:49.857948   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.857959   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:49.857967   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:49.858033   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:49.899517   60933 cri.go:89] found id: ""
	I1216 21:03:49.899548   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.899558   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:49.899565   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:49.899632   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:49.938771   60933 cri.go:89] found id: ""
	I1216 21:03:49.938797   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.938805   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:49.938810   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:49.938857   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:49.975748   60933 cri.go:89] found id: ""
	I1216 21:03:49.975781   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.975792   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:49.975800   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:49.975876   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:50.013057   60933 cri.go:89] found id: ""
	I1216 21:03:50.013082   60933 logs.go:282] 0 containers: []
	W1216 21:03:50.013090   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:50.013127   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:50.013178   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:50.049106   60933 cri.go:89] found id: ""
	I1216 21:03:50.049138   60933 logs.go:282] 0 containers: []
	W1216 21:03:50.049150   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:50.049161   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:50.049176   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:50.063815   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:50.063847   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:50.137801   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:50.137826   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:50.137841   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:50.218456   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:50.218495   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:50.263347   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:50.263379   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:52.824077   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:52.838096   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:52.838185   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:52.880550   60933 cri.go:89] found id: ""
	I1216 21:03:52.880582   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.880593   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:52.880600   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:52.880658   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:52.919728   60933 cri.go:89] found id: ""
	I1216 21:03:52.919751   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.919759   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:52.919764   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:52.919819   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:52.957519   60933 cri.go:89] found id: ""
	I1216 21:03:52.957542   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.957549   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:52.957555   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:52.957607   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:52.996631   60933 cri.go:89] found id: ""
	I1216 21:03:52.996663   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.996673   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:52.996681   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:52.996745   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:53.059902   60933 cri.go:89] found id: ""
	I1216 21:03:53.060014   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.060030   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:53.060039   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:53.060105   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:53.099367   60933 cri.go:89] found id: ""
	I1216 21:03:53.099395   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.099406   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:53.099419   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:53.099486   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:53.140668   60933 cri.go:89] found id: ""
	I1216 21:03:53.140696   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.140704   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:53.140709   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:53.140777   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:53.179182   60933 cri.go:89] found id: ""
	I1216 21:03:53.179208   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.179216   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:53.179225   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:53.179236   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:53.233441   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:53.233481   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:53.247526   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:53.247569   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:53.321868   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:53.321895   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:53.321911   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:53.410904   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:53.410959   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:55.954371   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:55.968506   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:55.968570   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:56.005087   60933 cri.go:89] found id: ""
	I1216 21:03:56.005118   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.005130   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:56.005137   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:56.005205   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:56.039443   60933 cri.go:89] found id: ""
	I1216 21:03:56.039467   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.039475   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:56.039486   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:56.039537   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:56.078181   60933 cri.go:89] found id: ""
	I1216 21:03:56.078213   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.078224   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:56.078231   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:56.078289   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:56.115809   60933 cri.go:89] found id: ""
	I1216 21:03:56.115833   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.115841   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:56.115848   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:56.115901   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:56.154299   60933 cri.go:89] found id: ""
	I1216 21:03:56.154323   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.154330   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:56.154336   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:56.154395   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:56.193069   60933 cri.go:89] found id: ""
	I1216 21:03:56.193098   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.193106   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:56.193112   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:56.193161   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:56.231067   60933 cri.go:89] found id: ""
	I1216 21:03:56.231099   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.231118   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:56.231125   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:56.231191   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:56.270980   60933 cri.go:89] found id: ""
	I1216 21:03:56.271011   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.271022   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:56.271035   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:56.271050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:56.321374   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:56.321405   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:56.336802   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:56.336847   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:56.414052   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:56.414078   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:56.414091   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:56.499118   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:56.499158   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:59.049386   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:59.063191   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:59.063300   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:59.102136   60933 cri.go:89] found id: ""
	I1216 21:03:59.102169   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.102180   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:59.102187   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:59.102255   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:59.138311   60933 cri.go:89] found id: ""
	I1216 21:03:59.138340   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.138357   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:59.138364   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:59.138431   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:59.176131   60933 cri.go:89] found id: ""
	I1216 21:03:59.176159   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.176169   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:59.176177   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:59.176259   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:59.214274   60933 cri.go:89] found id: ""
	I1216 21:03:59.214308   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.214320   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:59.214327   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:59.214397   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:59.254499   60933 cri.go:89] found id: ""
	I1216 21:03:59.254524   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.254531   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:59.254537   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:59.254602   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:59.292715   60933 cri.go:89] found id: ""
	I1216 21:03:59.292755   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.292765   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:59.292772   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:59.292836   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:59.333279   60933 cri.go:89] found id: ""
	I1216 21:03:59.333314   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.333325   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:59.333332   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:59.333404   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:59.372071   60933 cri.go:89] found id: ""
	I1216 21:03:59.372104   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.372116   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:59.372126   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:59.372143   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:59.389021   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:59.389051   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:59.503281   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:59.503304   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:59.503316   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:59.581761   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:59.581797   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:59.627604   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:59.627646   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:02.179425   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:02.195786   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:04:02.195850   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:04:02.239763   60933 cri.go:89] found id: ""
	I1216 21:04:02.239790   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.239801   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:04:02.239809   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:04:02.239873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:04:02.278885   60933 cri.go:89] found id: ""
	I1216 21:04:02.278914   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.278926   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:04:02.278935   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:04:02.279004   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:04:02.320701   60933 cri.go:89] found id: ""
	I1216 21:04:02.320731   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.320742   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:04:02.320749   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:04:02.320811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:04:02.357726   60933 cri.go:89] found id: ""
	I1216 21:04:02.357757   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.357767   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:04:02.357773   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:04:02.357826   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:04:02.399577   60933 cri.go:89] found id: ""
	I1216 21:04:02.399609   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.399618   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:04:02.399624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:04:02.399687   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:04:02.445559   60933 cri.go:89] found id: ""
	I1216 21:04:02.445590   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.445600   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:04:02.445607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:04:02.445670   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:04:02.482983   60933 cri.go:89] found id: ""
	I1216 21:04:02.483015   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.483027   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:04:02.483035   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:04:02.483116   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:04:02.523028   60933 cri.go:89] found id: ""
	I1216 21:04:02.523055   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.523063   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:04:02.523073   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:04:02.523084   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:02.577447   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:04:02.577487   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:04:02.594539   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:04:02.594567   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:04:02.683805   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:04:02.683832   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:04:02.683848   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:04:02.763377   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:04:02.763416   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:04:05.311029   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:05.328358   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:04:05.328438   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:04:05.367378   60933 cri.go:89] found id: ""
	I1216 21:04:05.367402   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.367409   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:04:05.367419   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:04:05.367468   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:04:05.406268   60933 cri.go:89] found id: ""
	I1216 21:04:05.406291   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.406301   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:04:05.406306   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:04:05.406353   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:04:05.444737   60933 cri.go:89] found id: ""
	I1216 21:04:05.444767   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.444778   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:04:05.444787   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:04:05.444836   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:04:05.484044   60933 cri.go:89] found id: ""
	I1216 21:04:05.484132   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.484153   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:04:05.484161   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:04:05.484222   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:04:05.523395   60933 cri.go:89] found id: ""
	I1216 21:04:05.523420   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.523431   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:04:05.523439   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:04:05.523501   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:04:05.566925   60933 cri.go:89] found id: ""
	I1216 21:04:05.566954   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.566967   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:04:05.566974   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:04:05.567036   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:04:05.611275   60933 cri.go:89] found id: ""
	I1216 21:04:05.611303   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.611314   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:04:05.611321   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:04:05.611396   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:04:05.650340   60933 cri.go:89] found id: ""
	I1216 21:04:05.650371   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.650379   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:04:05.650389   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:04:05.650400   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:05.702277   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:04:05.702321   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:04:05.718685   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:04:05.718713   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:04:05.794979   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:04:05.795005   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:04:05.795020   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:04:05.897348   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:04:05.897383   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:04:08.447268   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:08.462553   60933 kubeadm.go:597] duration metric: took 4m2.545161532s to restartPrimaryControlPlane
	W1216 21:04:08.462621   60933 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:08.462650   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:10.315541   60933 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.85286561s)
	I1216 21:04:10.315622   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:10.330937   60933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:10.343702   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:10.356498   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:10.356526   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:10.356579   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:04:10.367777   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:10.367847   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:10.379109   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:04:10.389258   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:10.389313   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:10.399959   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:04:10.410664   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:10.410734   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:10.423138   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:04:10.433922   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:10.433976   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:04:10.445297   60933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:10.524236   60933 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 21:04:10.524344   60933 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:10.680331   60933 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:10.680489   60933 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:10.680641   60933 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 21:04:10.877305   60933 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:10.879375   60933 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:10.879496   60933 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:10.879567   60933 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:10.879647   60933 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:10.879748   60933 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:10.879865   60933 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:10.880127   60933 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:10.881047   60933 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:10.881874   60933 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:10.882778   60933 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:10.883678   60933 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:10.884029   60933 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:10.884130   60933 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:11.034011   60933 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:11.273509   60933 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:11.477553   60933 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:11.542158   60933 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:11.565791   60933 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:11.567317   60933 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:11.567409   60933 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:11.763223   60933 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:11.766107   60933 out.go:235]   - Booting up control plane ...
	I1216 21:04:11.766257   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:11.766367   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:11.768484   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:11.773601   60933 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:11.780554   60933 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 21:04:51.781856   60933 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 21:04:51.782285   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:04:51.782543   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:04:56.783069   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:04:56.783323   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:05:06.784032   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:05:06.784224   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:05:26.785021   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:05:26.785309   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:06:06.787417   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:06.787673   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:06:06.787700   60933 kubeadm.go:310] 
	I1216 21:06:06.787779   60933 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 21:06:06.787849   60933 kubeadm.go:310] 		timed out waiting for the condition
	I1216 21:06:06.787864   60933 kubeadm.go:310] 
	I1216 21:06:06.787894   60933 kubeadm.go:310] 	This error is likely caused by:
	I1216 21:06:06.787944   60933 kubeadm.go:310] 		- The kubelet is not running
	I1216 21:06:06.788115   60933 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 21:06:06.788131   60933 kubeadm.go:310] 
	I1216 21:06:06.788238   60933 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 21:06:06.788270   60933 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 21:06:06.788328   60933 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 21:06:06.788346   60933 kubeadm.go:310] 
	I1216 21:06:06.788492   60933 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 21:06:06.788568   60933 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 21:06:06.788575   60933 kubeadm.go:310] 
	I1216 21:06:06.788706   60933 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 21:06:06.788914   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 21:06:06.789052   60933 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 21:06:06.789150   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 21:06:06.789160   60933 kubeadm.go:310] 
	I1216 21:06:06.789970   60933 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:06:06.790084   60933 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 21:06:06.790222   60933 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1216 21:06:06.790376   60933 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 21:06:06.790430   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:06:07.272336   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:06:07.288881   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:06:07.303411   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:06:07.303437   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:06:07.303486   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:06:07.314605   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:06:07.314675   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:06:07.326523   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:06:07.336506   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:06:07.336587   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:06:07.347505   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:06:07.357743   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:06:07.357799   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:06:07.368251   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:06:07.378296   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:06:07.378366   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:06:07.390625   60933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:06:07.461800   60933 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 21:06:07.461911   60933 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:06:07.607467   60933 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:06:07.607664   60933 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:06:07.607821   60933 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 21:06:07.821429   60933 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:06:07.823617   60933 out.go:235]   - Generating certificates and keys ...
	I1216 21:06:07.823728   60933 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:06:07.823826   60933 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:06:07.823970   60933 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:06:07.824066   60933 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:06:07.824191   60933 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:06:07.824281   60933 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:06:07.824374   60933 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:06:07.824452   60933 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:06:07.824529   60933 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:06:07.824634   60933 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:06:07.824728   60933 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:06:07.824826   60933 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:06:08.070481   60933 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:06:08.416182   60933 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:06:08.472848   60933 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:06:08.528700   60933 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:06:08.551528   60933 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:06:08.552215   60933 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:06:08.552299   60933 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:06:08.702187   60933 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:06:08.704170   60933 out.go:235]   - Booting up control plane ...
	I1216 21:06:08.704286   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:06:08.721205   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:06:08.722619   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:06:08.724289   60933 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:06:08.726457   60933 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 21:06:48.729045   60933 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 21:06:48.729713   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:48.730028   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:06:53.730648   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:53.730870   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:07:03.731670   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:07:03.731904   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:07:23.733276   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:07:23.733489   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:08:03.734439   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:08:03.734730   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:08:03.734768   60933 kubeadm.go:310] 
	I1216 21:08:03.734831   60933 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 21:08:03.734902   60933 kubeadm.go:310] 		timed out waiting for the condition
	I1216 21:08:03.734917   60933 kubeadm.go:310] 
	I1216 21:08:03.734966   60933 kubeadm.go:310] 	This error is likely caused by:
	I1216 21:08:03.735003   60933 kubeadm.go:310] 		- The kubelet is not running
	I1216 21:08:03.735094   60933 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 21:08:03.735104   60933 kubeadm.go:310] 
	I1216 21:08:03.735260   60933 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 21:08:03.735325   60933 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 21:08:03.735353   60933 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 21:08:03.735359   60933 kubeadm.go:310] 
	I1216 21:08:03.735486   60933 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 21:08:03.735604   60933 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 21:08:03.735614   60933 kubeadm.go:310] 
	I1216 21:08:03.735757   60933 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 21:08:03.735880   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 21:08:03.735986   60933 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 21:08:03.736096   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 21:08:03.736107   60933 kubeadm.go:310] 
	I1216 21:08:03.736944   60933 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:08:03.737145   60933 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 21:08:03.737211   60933 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 21:08:03.737287   60933 kubeadm.go:394] duration metric: took 7m57.891196073s to StartCluster
	I1216 21:08:03.737346   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:08:03.737417   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:08:03.789377   60933 cri.go:89] found id: ""
	I1216 21:08:03.789412   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.789421   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:08:03.789426   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:08:03.789477   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:08:03.831122   60933 cri.go:89] found id: ""
	I1216 21:08:03.831150   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.831161   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:08:03.831167   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:08:03.831236   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:08:03.870598   60933 cri.go:89] found id: ""
	I1216 21:08:03.870625   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.870634   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:08:03.870640   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:08:03.870695   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:08:03.909060   60933 cri.go:89] found id: ""
	I1216 21:08:03.909095   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.909103   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:08:03.909109   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:08:03.909163   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:08:03.946925   60933 cri.go:89] found id: ""
	I1216 21:08:03.946954   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.946962   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:08:03.946968   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:08:03.947038   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:08:03.985596   60933 cri.go:89] found id: ""
	I1216 21:08:03.985629   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.985650   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:08:03.985670   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:08:03.985736   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:08:04.022504   60933 cri.go:89] found id: ""
	I1216 21:08:04.022530   60933 logs.go:282] 0 containers: []
	W1216 21:08:04.022538   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:08:04.022545   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:08:04.022608   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:08:04.075636   60933 cri.go:89] found id: ""
	I1216 21:08:04.075667   60933 logs.go:282] 0 containers: []
	W1216 21:08:04.075677   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:08:04.075688   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:08:04.075707   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:08:04.180622   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:08:04.180653   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:08:04.180671   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:08:04.308091   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:08:04.308146   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:08:04.353240   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:08:04.353294   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:08:04.407919   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:08:04.407955   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 21:08:04.423583   60933 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1216 21:08:04.423644   60933 out.go:270] * 
	* 
	W1216 21:08:04.423727   60933 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 21:08:04.423749   60933 out.go:270] * 
	* 
	W1216 21:08:04.424576   60933 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 21:08:04.428361   60933 out.go:201] 
	W1216 21:08:04.429839   60933 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 21:08:04.429919   60933 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 21:08:04.429958   60933 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 21:08:04.431619   60933 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-847766 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
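Note on the failure above: kubeadm timed out because the kubelet on the node never answered its health check, and minikube's own output suggests inspecting the kubelet journal and trying the kubelet cgroup-driver override. A minimal troubleshooting sketch, assuming the same profile name and driver as the failed command and using only the commands the log itself recommends (not a verified fix):

	# Inspect the kubelet inside the minikube VM, as the kubeadm output suggests
	minikube -p old-k8s-version-847766 ssh -- sudo systemctl status kubelet
	minikube -p old-k8s-version-847766 ssh -- sudo journalctl -xeu kubelet | tail -n 100
	# List control-plane containers through CRI-O, per the crictl hint in the log
	minikube -p old-k8s-version-847766 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Retry the start with the cgroup-driver setting from the 'Suggestion' line
	out/minikube-linux-amd64 start -p old-k8s-version-847766 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd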
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-847766 -n old-k8s-version-847766
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-847766 -n old-k8s-version-847766: exit status 2 (246.116082ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-847766 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-847766 logs -n 25: (1.602120953s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-976873                              | stopped-upgrade-976873       | jenkins | v1.34.0 | 16 Dec 24 20:49 UTC | 16 Dec 24 20:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-560677                           | kubernetes-upgrade-560677    | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:50 UTC |
	| start   | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-976873                              | stopped-upgrade-976873       | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:50 UTC |
	| start   | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-270954                              | cert-expiration-270954       | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-606219            | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-270954                              | cert-expiration-270954       | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-384008 | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | disable-driver-mounts-384008                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:52 UTC |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-232338             | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC | 16 Dec 24 20:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-327790  | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC | 16 Dec 24 20:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC |                     |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-847766        | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-606219                 | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC | 16 Dec 24 21:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-232338                  | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC | 16 Dec 24 21:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-327790       | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 20:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 21:04 UTC |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-847766             | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 20:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 20:55:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 20:55:34.390724   60933 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:55:34.390973   60933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:55:34.390982   60933 out.go:358] Setting ErrFile to fd 2...
	I1216 20:55:34.390986   60933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:55:34.391166   60933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:55:34.391763   60933 out.go:352] Setting JSON to false
	I1216 20:55:34.392611   60933 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5879,"bootTime":1734376655,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 20:55:34.392675   60933 start.go:139] virtualization: kvm guest
	I1216 20:55:34.394822   60933 out.go:177] * [old-k8s-version-847766] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 20:55:34.396184   60933 notify.go:220] Checking for updates...
	I1216 20:55:34.396201   60933 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 20:55:34.397724   60933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 20:55:34.399130   60933 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:55:34.400470   60933 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:55:34.401934   60933 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 20:55:34.403341   60933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 20:55:34.405179   60933 config.go:182] Loaded profile config "old-k8s-version-847766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:55:34.405571   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:55:34.405650   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:55:34.421052   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I1216 20:55:34.421523   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:55:34.422018   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:55:34.422035   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:55:34.422373   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:55:34.422646   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:55:34.424565   60933 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I1216 20:55:34.426088   60933 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 20:55:34.426419   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:55:34.426474   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:55:34.441375   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I1216 20:55:34.441833   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:55:34.442327   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:55:34.442349   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:55:34.442658   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:55:34.442852   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:55:34.480512   60933 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 20:55:34.481972   60933 start.go:297] selected driver: kvm2
	I1216 20:55:34.481988   60933 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:55:34.482125   60933 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 20:55:34.482826   60933 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:55:34.482907   60933 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 20:55:34.498561   60933 install.go:137] /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1216 20:55:34.498953   60933 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 20:55:34.498981   60933 cni.go:84] Creating CNI manager for ""
	I1216 20:55:34.499022   60933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:55:34.499060   60933 start.go:340] cluster config:
	{Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:55:34.499164   60933 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:55:34.501128   60933 out.go:177] * Starting "old-k8s-version-847766" primary control-plane node in "old-k8s-version-847766" cluster
	I1216 20:55:29.827520   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:32.899553   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:30.468027   60829 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:55:30.468071   60829 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:55:30.468079   60829 cache.go:56] Caching tarball of preloaded images
	I1216 20:55:30.468192   60829 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:55:30.468206   60829 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1216 20:55:30.468310   60829 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/config.json ...
	I1216 20:55:30.468540   60829 start.go:360] acquireMachinesLock for default-k8s-diff-port-327790: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:55:34.502579   60933 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:55:34.502609   60933 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:55:34.502615   60933 cache.go:56] Caching tarball of preloaded images
	I1216 20:55:34.502716   60933 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:55:34.502731   60933 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1216 20:55:34.502823   60933 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/config.json ...
	I1216 20:55:34.503011   60933 start.go:360] acquireMachinesLock for old-k8s-version-847766: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:55:38.979556   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:42.051532   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:48.131588   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:51.203568   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:57.283622   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:00.355490   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:06.435543   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:09.507559   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:15.587526   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:18.659657   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:24.739528   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:27.811498   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:33.891518   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:36.963554   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:43.043553   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:46.115578   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:52.195583   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:55.267507   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:01.347591   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:04.419562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:10.499479   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:13.571540   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:19.651541   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:22.723545   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:28.803551   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:31.875527   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:37.955563   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:41.027520   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:47.107494   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:50.179566   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:56.259550   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:59.331540   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:05.411562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:08.483592   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:14.563574   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:17.635522   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:23.715548   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:26.787559   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:32.867539   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:35.939502   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:42.019562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:45.091545   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:51.171521   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:54.243542   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:57.248710   60421 start.go:364] duration metric: took 4m14.403979547s to acquireMachinesLock for "no-preload-232338"
	I1216 20:58:57.248796   60421 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:58:57.248804   60421 fix.go:54] fixHost starting: 
	I1216 20:58:57.249232   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:58:57.249288   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:58:57.264905   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39773
	I1216 20:58:57.265423   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:58:57.265982   60421 main.go:141] libmachine: Using API Version  1
	I1216 20:58:57.266005   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:58:57.266396   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:58:57.266636   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:58:57.266807   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 20:58:57.268705   60421 fix.go:112] recreateIfNeeded on no-preload-232338: state=Stopped err=<nil>
	I1216 20:58:57.268730   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	W1216 20:58:57.268918   60421 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:58:57.270855   60421 out.go:177] * Restarting existing kvm2 VM for "no-preload-232338" ...
	I1216 20:58:57.272142   60421 main.go:141] libmachine: (no-preload-232338) Calling .Start
	I1216 20:58:57.272374   60421 main.go:141] libmachine: (no-preload-232338) Ensuring networks are active...
	I1216 20:58:57.273245   60421 main.go:141] libmachine: (no-preload-232338) Ensuring network default is active
	I1216 20:58:57.273660   60421 main.go:141] libmachine: (no-preload-232338) Ensuring network mk-no-preload-232338 is active
	I1216 20:58:57.274025   60421 main.go:141] libmachine: (no-preload-232338) Getting domain xml...
	I1216 20:58:57.274673   60421 main.go:141] libmachine: (no-preload-232338) Creating domain...
	I1216 20:58:57.245632   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:58:57.245753   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 20:58:57.246111   60215 buildroot.go:166] provisioning hostname "embed-certs-606219"
	I1216 20:58:57.246149   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 20:58:57.246399   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 20:58:57.248517   60215 machine.go:96] duration metric: took 4m37.414570479s to provisionDockerMachine
	I1216 20:58:57.248579   60215 fix.go:56] duration metric: took 4m37.437232743s for fixHost
	I1216 20:58:57.248587   60215 start.go:83] releasing machines lock for "embed-certs-606219", held for 4m37.437262865s
	W1216 20:58:57.248614   60215 start.go:714] error starting host: provision: host is not running
	W1216 20:58:57.248791   60215 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1216 20:58:57.248801   60215 start.go:729] Will try again in 5 seconds ...
	I1216 20:58:58.506521   60421 main.go:141] libmachine: (no-preload-232338) Waiting to get IP...
	I1216 20:58:58.507302   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:58.507627   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:58.507699   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:58.507613   61660 retry.go:31] will retry after 230.281045ms: waiting for machine to come up
	I1216 20:58:58.739343   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:58.739781   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:58.739804   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:58.739741   61660 retry.go:31] will retry after 323.962271ms: waiting for machine to come up
	I1216 20:58:59.065388   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:59.065856   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:59.065884   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:59.065816   61660 retry.go:31] will retry after 364.058481ms: waiting for machine to come up
	I1216 20:58:59.431290   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:59.431680   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:59.431707   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:59.431631   61660 retry.go:31] will retry after 569.845721ms: waiting for machine to come up
	I1216 20:59:00.003562   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:00.004030   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:00.004093   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:00.003988   61660 retry.go:31] will retry after 728.729909ms: waiting for machine to come up
	I1216 20:59:00.733954   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:00.734450   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:00.734482   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:00.734388   61660 retry.go:31] will retry after 679.479889ms: waiting for machine to come up
	I1216 20:59:01.415289   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:01.415739   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:01.415763   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:01.415690   61660 retry.go:31] will retry after 1.136560245s: waiting for machine to come up
	I1216 20:59:02.554094   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:02.554523   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:02.554548   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:02.554470   61660 retry.go:31] will retry after 1.299578742s: waiting for machine to come up
	I1216 20:59:02.250499   60215 start.go:360] acquireMachinesLock for embed-certs-606219: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:59:03.855999   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:03.856366   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:03.856399   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:03.856300   61660 retry.go:31] will retry after 1.761269163s: waiting for machine to come up
	I1216 20:59:05.620383   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:05.620837   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:05.620858   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:05.620818   61660 retry.go:31] will retry after 2.100894301s: waiting for machine to come up
	I1216 20:59:07.723931   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:07.724300   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:07.724322   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:07.724273   61660 retry.go:31] will retry after 2.57501483s: waiting for machine to come up
	I1216 20:59:10.302185   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:10.302766   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:10.302802   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:10.302706   61660 retry.go:31] will retry after 2.68456895s: waiting for machine to come up
	I1216 20:59:17.060397   60829 start.go:364] duration metric: took 3m46.591813882s to acquireMachinesLock for "default-k8s-diff-port-327790"
	I1216 20:59:17.060456   60829 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:17.060462   60829 fix.go:54] fixHost starting: 
	I1216 20:59:17.060878   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:17.060935   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:17.079226   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41365
	I1216 20:59:17.079715   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:17.080173   60829 main.go:141] libmachine: Using API Version  1
	I1216 20:59:17.080202   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:17.080554   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:17.080731   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:17.080873   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 20:59:17.082368   60829 fix.go:112] recreateIfNeeded on default-k8s-diff-port-327790: state=Stopped err=<nil>
	I1216 20:59:17.082399   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	W1216 20:59:17.082570   60829 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:17.085104   60829 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-327790" ...
	I1216 20:59:12.988787   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:12.989140   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:12.989172   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:12.989098   61660 retry.go:31] will retry after 2.793178881s: waiting for machine to come up
	I1216 20:59:15.786011   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.786518   60421 main.go:141] libmachine: (no-preload-232338) Found IP for machine: 192.168.50.240
	I1216 20:59:15.786540   60421 main.go:141] libmachine: (no-preload-232338) Reserving static IP address...
	I1216 20:59:15.786564   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has current primary IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.786948   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "no-preload-232338", mac: "52:54:00:07:00:29", ip: "192.168.50.240"} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.786983   60421 main.go:141] libmachine: (no-preload-232338) DBG | skip adding static IP to network mk-no-preload-232338 - found existing host DHCP lease matching {name: "no-preload-232338", mac: "52:54:00:07:00:29", ip: "192.168.50.240"}
	I1216 20:59:15.786995   60421 main.go:141] libmachine: (no-preload-232338) Reserved static IP address: 192.168.50.240
	I1216 20:59:15.787009   60421 main.go:141] libmachine: (no-preload-232338) Waiting for SSH to be available...
	I1216 20:59:15.787022   60421 main.go:141] libmachine: (no-preload-232338) DBG | Getting to WaitForSSH function...
	I1216 20:59:15.789175   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.789509   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.789542   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.789633   60421 main.go:141] libmachine: (no-preload-232338) DBG | Using SSH client type: external
	I1216 20:59:15.789674   60421 main.go:141] libmachine: (no-preload-232338) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa (-rw-------)
	I1216 20:59:15.789709   60421 main.go:141] libmachine: (no-preload-232338) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:15.789718   60421 main.go:141] libmachine: (no-preload-232338) DBG | About to run SSH command:
	I1216 20:59:15.789726   60421 main.go:141] libmachine: (no-preload-232338) DBG | exit 0
	I1216 20:59:15.915980   60421 main.go:141] libmachine: (no-preload-232338) DBG | SSH cmd err, output: <nil>: 
	I1216 20:59:15.916473   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetConfigRaw
	I1216 20:59:15.917088   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:15.919782   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.920161   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.920192   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.920408   60421 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/config.json ...
	I1216 20:59:15.920636   60421 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:15.920654   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:15.920875   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:15.923221   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.923623   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.923650   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.923784   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:15.923971   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:15.924107   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:15.924246   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:15.924395   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:15.924715   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:15.924729   60421 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:16.032079   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:16.032108   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.032397   60421 buildroot.go:166] provisioning hostname "no-preload-232338"
	I1216 20:59:16.032423   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.032649   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.035467   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.035798   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.035826   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.036003   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.036184   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.036335   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.036494   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.036679   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.036847   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.036859   60421 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-232338 && echo "no-preload-232338" | sudo tee /etc/hostname
	I1216 20:59:16.161958   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-232338
	
	I1216 20:59:16.161996   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.164585   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.165086   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.165130   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.165370   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.165578   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.165746   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.165877   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.166015   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.166188   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.166204   60421 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-232338' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-232338/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-232338' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:16.285329   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:16.285374   60421 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:16.285407   60421 buildroot.go:174] setting up certificates
	I1216 20:59:16.285422   60421 provision.go:84] configureAuth start
	I1216 20:59:16.285432   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.285764   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:16.288773   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.289161   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.289192   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.289405   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.291687   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.292042   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.292072   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.292190   60421 provision.go:143] copyHostCerts
	I1216 20:59:16.292260   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:16.292274   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:16.292343   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:16.292470   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:16.292481   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:16.292508   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:16.292563   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:16.292570   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:16.292590   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:16.292649   60421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.no-preload-232338 san=[127.0.0.1 192.168.50.240 localhost minikube no-preload-232338]
	I1216 20:59:16.407096   60421 provision.go:177] copyRemoteCerts
	I1216 20:59:16.407187   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:16.407227   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.410400   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.410725   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.410755   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.410977   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.411188   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.411437   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.411618   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:16.498456   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 20:59:16.525297   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:16.551135   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 20:59:16.576040   60421 provision.go:87] duration metric: took 290.601941ms to configureAuth
	I1216 20:59:16.576074   60421 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:16.576288   60421 config.go:182] Loaded profile config "no-preload-232338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:59:16.576396   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.579169   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.579607   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.579641   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.579795   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.580016   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.580165   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.580311   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.580467   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.580629   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.580643   60421 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:16.816973   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:16.816998   60421 machine.go:96] duration metric: took 896.349056ms to provisionDockerMachine
	I1216 20:59:16.817010   60421 start.go:293] postStartSetup for "no-preload-232338" (driver="kvm2")
	I1216 20:59:16.817030   60421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:16.817044   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:16.817427   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:16.817454   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.820182   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.820550   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.820578   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.820713   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.820914   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.821096   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.821274   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:16.906513   60421 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:16.911314   60421 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:16.911346   60421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:16.911482   60421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:16.911589   60421 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:16.911720   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:16.921890   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:16.947114   60421 start.go:296] duration metric: took 130.089628ms for postStartSetup
	I1216 20:59:16.947192   60421 fix.go:56] duration metric: took 19.698385497s for fixHost
	I1216 20:59:16.947229   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.950156   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.950543   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.950575   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.950780   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.950996   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.951199   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.951394   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.951604   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.951829   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.951843   60421 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:17.060233   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382757.032597424
	
	I1216 20:59:17.060258   60421 fix.go:216] guest clock: 1734382757.032597424
	I1216 20:59:17.060265   60421 fix.go:229] Guest: 2024-12-16 20:59:17.032597424 +0000 UTC Remote: 2024-12-16 20:59:16.947203535 +0000 UTC m=+274.247918927 (delta=85.393889ms)
	I1216 20:59:17.060290   60421 fix.go:200] guest clock delta is within tolerance: 85.393889ms
	I1216 20:59:17.060294   60421 start.go:83] releasing machines lock for "no-preload-232338", held for 19.811539815s
	I1216 20:59:17.060318   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.060636   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:17.063346   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.063742   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.063764   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.063900   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064419   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064647   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064766   60421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:17.064804   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:17.064897   60421 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:17.064923   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:17.067687   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.067897   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068129   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.068166   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068314   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:17.068318   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.068343   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068491   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:17.068573   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:17.068754   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:17.068778   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:17.068914   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:17.069085   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:17.069229   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:17.149502   60421 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:17.184981   60421 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:17.335267   60421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:17.344316   60421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:17.344381   60421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:17.362422   60421 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:17.362450   60421 start.go:495] detecting cgroup driver to use...
	I1216 20:59:17.362526   60421 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:17.379285   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:17.394451   60421 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:17.394514   60421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:17.411856   60421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:17.428028   60421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:17.557602   60421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:17.699140   60421 docker.go:233] disabling docker service ...
	I1216 20:59:17.699215   60421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:17.715236   60421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:17.729268   60421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:17.875729   60421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:18.007569   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:18.022940   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:18.042227   60421 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 20:59:18.042292   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.053011   60421 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:18.053081   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.063767   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.074262   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.085372   60421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:18.098366   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.113619   60421 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.134081   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.145276   60421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:18.155733   60421 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:18.155806   60421 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:18.170492   60421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 20:59:18.182276   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:18.291278   60421 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:18.384618   60421 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:18.384700   60421 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:18.390755   60421 start.go:563] Will wait 60s for crictl version
	I1216 20:59:18.390823   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.395435   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:18.439300   60421 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:18.439390   60421 ssh_runner.go:195] Run: crio --version
	I1216 20:59:18.473976   60421 ssh_runner.go:195] Run: crio --version
	I1216 20:59:18.505262   60421 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 20:59:17.086569   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Start
	I1216 20:59:17.086752   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring networks are active...
	I1216 20:59:17.087656   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring network default is active
	I1216 20:59:17.088082   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring network mk-default-k8s-diff-port-327790 is active
	I1216 20:59:17.088482   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Getting domain xml...
	I1216 20:59:17.089219   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Creating domain...
	I1216 20:59:18.413245   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting to get IP...
	I1216 20:59:18.414327   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.414794   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.414907   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.414784   61807 retry.go:31] will retry after 229.952775ms: waiting for machine to come up
	I1216 20:59:18.646270   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.646677   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.646727   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.646654   61807 retry.go:31] will retry after 341.342128ms: waiting for machine to come up
	I1216 20:59:18.989285   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.989781   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.989809   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.989740   61807 retry.go:31] will retry after 311.937657ms: waiting for machine to come up
	I1216 20:59:19.303619   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.304189   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.304221   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:19.304131   61807 retry.go:31] will retry after 515.638431ms: waiting for machine to come up
	I1216 20:59:19.821478   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.821955   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.821997   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:19.821900   61807 retry.go:31] will retry after 590.835789ms: waiting for machine to come up
	I1216 20:59:18.506840   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:18.510260   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:18.510654   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:18.510689   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:18.510875   60421 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:18.515632   60421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
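The two commands above first probe /etc/hosts for the gateway entry and then rewrite the file in place: any stale host.minikube.internal line is stripped and the current mapping appended. A sketch of the same rewrite in Go follows; rewriteHosts is an illustrative helper (it only prints the result), not minikube's own code.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // rewriteHosts drops any existing "<tab>host.minikube.internal" entry and
    // appends the current mapping, mirroring the bash one-liner in the log.
    func rewriteHosts(content, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(content, "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // same effect as: grep -v $'\thost.minikube.internal$'
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Print(rewriteHosts(strings.TrimRight(string(data), "\n"), "192.168.50.1", "host.minikube.internal"))
    }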
	I1216 20:59:18.529943   60421 kubeadm.go:883] updating cluster {Name:no-preload-232338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.32.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:18.530128   60421 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:59:18.530184   60421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:18.569526   60421 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 20:59:18.569555   60421 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.0 registry.k8s.io/kube-controller-manager:v1.32.0 registry.k8s.io/kube-scheduler:v1.32.0 registry.k8s.io/kube-proxy:v1.32.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 20:59:18.569650   60421 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:18.569669   60421 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.569688   60421 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.569651   60421 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.569774   60421 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.569859   60421 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.569859   60421 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1216 20:59:18.570294   60421 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.571577   60421 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.571602   60421 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.571582   60421 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:18.571585   60421 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.571583   60421 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.571580   60421 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.571828   60421 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.571953   60421 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1216 20:59:18.781052   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.783569   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.795901   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.799273   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.801098   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.802163   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1216 20:59:18.828334   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.897880   60421 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.0" does not exist at hash "a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5" in container runtime
	I1216 20:59:18.897942   60421 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.898003   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.910616   60421 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I1216 20:59:18.910665   60421 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.910713   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.937699   60421 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.0" does not exist at hash "8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3" in container runtime
	I1216 20:59:18.937753   60421 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.937804   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.979455   60421 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.0" does not exist at hash "c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4" in container runtime
	I1216 20:59:18.979500   60421 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.979540   60421 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1216 20:59:18.979555   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.979586   60421 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.979636   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.002472   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.076177   60421 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.0" needs transfer: "registry.k8s.io/kube-proxy:v1.32.0" does not exist at hash "040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08" in container runtime
	I1216 20:59:19.076217   60421 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.076237   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.076252   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.076292   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.076351   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.076408   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.076487   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.076511   60421 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1216 20:59:19.076536   60421 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.076580   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.204766   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.204846   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.204904   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.204959   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.205097   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.205212   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.205285   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.365421   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.365466   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.365512   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.365620   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.365652   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.365771   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.365861   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.539614   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1216 20:59:19.539729   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:19.539740   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.539740   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0
	I1216 20:59:19.539817   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0
	I1216 20:59:19.539839   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:19.539840   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.539885   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.539949   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0
	I1216 20:59:19.540000   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I1216 20:59:19.540029   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:19.540062   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:19.555043   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.32.0 (exists)
	I1216 20:59:19.555076   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.555135   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.555251   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1216 20:59:19.630857   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.16-0 (exists)
	I1216 20:59:19.630949   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1216 20:59:19.630983   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0
	I1216 20:59:19.631030   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.32.0 (exists)
	I1216 20:59:19.631065   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:19.631104   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.32.0 (exists)
	I1216 20:59:19.631069   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:21.838285   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0: (2.283119694s)
	I1216 20:59:21.838328   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 from cache
	I1216 20:59:21.838359   60421 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:21.838394   60421 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.20725659s)
	I1216 20:59:21.838414   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1216 20:59:21.838421   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:21.838361   60421 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.0: (2.207274997s)
	I1216 20:59:21.838471   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.32.0 (exists)
	I1216 20:59:20.414932   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:20.415565   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:20.415597   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:20.415502   61807 retry.go:31] will retry after 698.152518ms: waiting for machine to come up
	I1216 20:59:21.115103   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:21.115597   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:21.115627   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:21.115543   61807 retry.go:31] will retry after 891.02308ms: waiting for machine to come up
	I1216 20:59:22.008636   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.009070   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.009098   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:22.009015   61807 retry.go:31] will retry after 923.634312ms: waiting for machine to come up
	I1216 20:59:22.934238   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.934753   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.934784   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:22.934697   61807 retry.go:31] will retry after 1.142718367s: waiting for machine to come up
	I1216 20:59:24.078935   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:24.079398   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:24.079429   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:24.079363   61807 retry.go:31] will retry after 1.541033224s: waiting for machine to come up
	I1216 20:59:23.901058   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.062611423s)
	I1216 20:59:23.901091   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1216 20:59:23.901122   60421 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:23.901169   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:25.621932   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:25.622401   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:25.622433   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:25.622364   61807 retry.go:31] will retry after 2.600280234s: waiting for machine to come up
	I1216 20:59:28.224296   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:28.224874   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:28.224892   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:28.224828   61807 retry.go:31] will retry after 3.308841216s: waiting for machine to come up
	I1216 20:59:27.793238   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0: (3.892042799s)
	I1216 20:59:27.793280   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 from cache
	I1216 20:59:27.793321   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:27.793420   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:29.552069   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0: (1.758623471s)
	I1216 20:59:29.552102   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 from cache
	I1216 20:59:29.552130   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:29.552177   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:31.708930   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0: (2.156719559s)
	I1216 20:59:31.708971   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 from cache
	I1216 20:59:31.709008   60421 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:31.709057   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:32.660657   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1216 20:59:32.660713   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:32.660775   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:31.537153   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:31.537735   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:31.537795   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:31.537710   61807 retry.go:31] will retry after 4.259700282s: waiting for machine to come up
	I1216 20:59:37.140408   60933 start.go:364] duration metric: took 4m2.637362394s to acquireMachinesLock for "old-k8s-version-847766"
	I1216 20:59:37.140483   60933 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:37.140491   60933 fix.go:54] fixHost starting: 
	I1216 20:59:37.140933   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:37.140988   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:37.159075   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
	I1216 20:59:37.159574   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:37.160140   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:59:37.160172   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:37.160560   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:37.160773   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:37.160889   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetState
	I1216 20:59:37.162561   60933 fix.go:112] recreateIfNeeded on old-k8s-version-847766: state=Stopped err=<nil>
	I1216 20:59:37.162603   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	W1216 20:59:37.162755   60933 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:37.166031   60933 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-847766" ...
	I1216 20:59:34.634064   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0: (1.973261206s)
	I1216 20:59:34.634117   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 from cache
	I1216 20:59:34.634154   60421 cache_images.go:123] Successfully loaded all cached images
	I1216 20:59:34.634160   60421 cache_images.go:92] duration metric: took 16.064590407s to LoadCachedImages
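The sequence above is the no-preload image path: `podman image inspect` decides whether each image is already in the runtime, missing images are marked "needs transfer", stale tags are removed with `crictl rmi`, and the cached tarballs under /var/lib/minikube/images are loaded with `podman load -i`. A per-image sketch of that flow, assuming it runs on the guest with passwordless sudo; the image name and tarball path are just examples taken from the log, and this is not minikube's cache_images code.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func ensureImage(image, tarball string) error {
        // 1. already present in the container runtime?
        if exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil {
            return nil
        }
        // 2. drop any stale tag so the load starts clean (errors ignored, as in the log)
        _ = exec.Command("sudo", "crictl", "rmi", image).Run()
        // 3. load the tarball that was copied to /var/lib/minikube/images
        if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
            return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
        }
        return nil
    }

    func main() {
        err := ensureImage("registry.k8s.io/kube-scheduler:v1.32.0",
            "/var/lib/minikube/images/kube-scheduler_v1.32.0")
        fmt.Println("ensureImage:", err)
    }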
	I1216 20:59:34.634171   60421 kubeadm.go:934] updating node { 192.168.50.240 8443 v1.32.0 crio true true} ...
	I1216 20:59:34.634331   60421 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-232338 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 20:59:34.634420   60421 ssh_runner.go:195] Run: crio config
	I1216 20:59:34.688034   60421 cni.go:84] Creating CNI manager for ""
	I1216 20:59:34.688059   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:34.688068   60421 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:59:34.688093   60421 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.240 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-232338 NodeName:no-preload-232338 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 20:59:34.688277   60421 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-232338"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.240"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.240"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 20:59:34.688356   60421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 20:59:34.699709   60421 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:59:34.699784   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:59:34.710306   60421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1216 20:59:34.732401   60421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:59:34.757561   60421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
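The 2297-byte kubeadm.yaml written above is the four-document file dumped earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by ---). A small sketch that walks those documents and prints each apiVersion/kind, assuming gopkg.in/yaml.v3 is available; kubeadm itself parses the file with its own scheme, so this is only a way to inspect it.

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        // Path taken from the log; the file holds four YAML documents separated by ---.
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                fmt.Fprintln(os.Stderr, "decode:", err)
                return
            }
            // e.g. "kubeadm.k8s.io/v1beta4 InitConfiguration"
            fmt.Println(doc["apiVersion"], doc["kind"])
        }
    }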
	I1216 20:59:34.776094   60421 ssh_runner.go:195] Run: grep 192.168.50.240	control-plane.minikube.internal$ /etc/hosts
	I1216 20:59:34.780341   60421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:34.794025   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:34.930543   60421 ssh_runner.go:195] Run: sudo systemctl start kubelet
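The kubelet unit and its 10-kubeadm.conf drop-in rendered earlier are copied into place above, followed by a daemon-reload and a start. A sketch of that install-and-start step, with unitText/dropInText standing in for the rendered content; it would need root, and it is illustrative rather than minikube's bootstrapper code.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func installAndStartKubelet(unitText, dropInText string) error {
        if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
            return err
        }
        if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropInText), 0644); err != nil {
            return err
        }
        if err := os.WriteFile("/lib/systemd/system/kubelet.service", []byte(unitText), 0644); err != nil {
            return err
        }
        for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v\n%s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        // Placeholder content; the real text is the unit and drop-in dumped above.
        fmt.Println(installAndStartKubelet("[Unit]\nWants=crio.service\n", "[Service]\nExecStart=\n"))
    }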
	I1216 20:59:34.948720   60421 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338 for IP: 192.168.50.240
	I1216 20:59:34.948752   60421 certs.go:194] generating shared ca certs ...
	I1216 20:59:34.948776   60421 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:59:34.949035   60421 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:59:34.949094   60421 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:59:34.949115   60421 certs.go:256] generating profile certs ...
	I1216 20:59:34.949243   60421 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.key
	I1216 20:59:34.949327   60421 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.key.674e04e3
	I1216 20:59:34.949379   60421 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.key
	I1216 20:59:34.949509   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:59:34.949547   60421 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:59:34.949557   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:59:34.949582   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:59:34.949604   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:59:34.949627   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:59:34.949662   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:34.950648   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:59:34.994491   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:59:35.029853   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:59:35.058834   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:59:35.096870   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 20:59:35.126467   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 20:59:35.160826   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:59:35.186344   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 20:59:35.211125   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:59:35.238705   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:59:35.266485   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:59:35.291729   60421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 20:59:35.311939   60421 ssh_runner.go:195] Run: openssl version
	I1216 20:59:35.318397   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:59:35.332081   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.336967   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.337022   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.343307   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 20:59:35.356515   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:59:35.370380   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.375538   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.375589   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.381736   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:59:35.395677   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:59:35.409029   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.414358   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.414427   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.421352   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
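The ls/hash/ln sequence above installs each CA under /usr/share/ca-certificates and links it from /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0), which is the layout OpenSSL's default verify path expects. A sketch of one such link, shelling out to the same `openssl x509 -hash -noout` call; it needs root, and the path is just the example from the log.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCert asks openssl for the subject hash and creates
    // /etc/ssl/certs/<hash>.0 pointing at the certificate.
    func linkCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := "/etc/ssl/certs/" + hash + ".0"
        _ = os.Remove(link) // ln -fs semantics: replace an existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }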
	I1216 20:59:35.435322   60421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:59:35.440479   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 20:59:35.447408   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 20:59:35.453992   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 20:59:35.460713   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 20:59:35.467109   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 20:59:35.473412   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
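Each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether a control-plane cert still has enough life left. The same check can be done with crypto/x509; the path below is one of the examples from the log.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate's NotAfter falls inside the
    // next d, which is what `openssl x509 -checkend 86400` tests for 24 hours.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println("expires within 24h:", soon, err)
    }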
	I1216 20:59:35.479720   60421 kubeadm.go:392] StartCluster: {Name:no-preload-232338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32
.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:59:35.479824   60421 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:59:35.479901   60421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:35.521238   60421 cri.go:89] found id: ""
	I1216 20:59:35.521331   60421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 20:59:35.534818   60421 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 20:59:35.534848   60421 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 20:59:35.534893   60421 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 20:59:35.547460   60421 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 20:59:35.548501   60421 kubeconfig.go:125] found "no-preload-232338" server: "https://192.168.50.240:8443"
	I1216 20:59:35.550575   60421 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 20:59:35.560957   60421 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.240
	I1216 20:59:35.561018   60421 kubeadm.go:1160] stopping kube-system containers ...
	I1216 20:59:35.561033   60421 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 20:59:35.561094   60421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:35.598970   60421 cri.go:89] found id: ""
	I1216 20:59:35.599082   60421 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 20:59:35.618027   60421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:59:35.629418   60421 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:59:35.629455   60421 kubeadm.go:157] found existing configuration files:
	
	I1216 20:59:35.629501   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 20:59:35.639825   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:59:35.639896   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:59:35.650676   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 20:59:35.662171   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:59:35.662228   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:59:35.674780   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 20:59:35.686565   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:59:35.686640   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:59:35.698956   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 20:59:35.710813   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:59:35.710874   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 20:59:35.723307   60421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 20:59:35.734712   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:35.863375   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.021512   60421 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.158099337s)
	I1216 20:59:37.021546   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.269641   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.348978   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.428210   60421 api_server.go:52] waiting for apiserver process to appear ...
	I1216 20:59:37.428296   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
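After the init phases are re-run above, the bootstrapper waits for the apiserver process by polling the same pgrep pattern. A small polling sketch of that wait; in the real flow the command runs over SSH on the node, here it is simply executed locally.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls the same pgrep pattern as the log until the
    // kube-apiserver process exists or the timeout elapses.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil // pgrep exits 0 once a matching process is found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServerProcess(2 * time.Minute))
    }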
	I1216 20:59:35.800344   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.800861   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Found IP for machine: 192.168.39.162
	I1216 20:59:35.800889   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has current primary IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.800899   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Reserving static IP address...
	I1216 20:59:35.801367   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-327790", mac: "52:54:00:68:47:d5", ip: "192.168.39.162"} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.801395   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Reserved static IP address: 192.168.39.162
	I1216 20:59:35.801419   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | skip adding static IP to network mk-default-k8s-diff-port-327790 - found existing host DHCP lease matching {name: "default-k8s-diff-port-327790", mac: "52:54:00:68:47:d5", ip: "192.168.39.162"}
	I1216 20:59:35.801439   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for SSH to be available...
	I1216 20:59:35.801452   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Getting to WaitForSSH function...
	I1216 20:59:35.803875   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.804226   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.804257   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.804407   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Using SSH client type: external
	I1216 20:59:35.804439   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa (-rw-------)
	I1216 20:59:35.804472   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:35.804493   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | About to run SSH command:
	I1216 20:59:35.804517   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | exit 0
	I1216 20:59:35.935325   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | SSH cmd err, output: <nil>: 
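The external-ssh block above probes the machine by running `exit 0` through /usr/bin/ssh with host-key checking disabled and the machine's private key. The sketch below rebuilds the core of that argument list (a subset of the options shown in the log) and runs the same probe.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // probeSSH runs `exit 0` on the guest through the external ssh client with a
    // subset of the options shown in the log (no known-hosts file, key-only auth).
    func probeSSH(ip, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@" + ip,
            "exit 0",
        }
        return exec.Command("/usr/bin/ssh", args...).Run()
    }

    func main() {
        err := probeSSH("192.168.39.162",
            "/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa")
        fmt.Println("ssh probe:", err)
    }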
	I1216 20:59:35.935765   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetConfigRaw
	I1216 20:59:35.936442   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:35.938945   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.939369   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.939395   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.939654   60829 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/config.json ...
	I1216 20:59:35.939915   60829 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:35.939938   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:35.940183   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:35.942412   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.942758   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.942787   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.942885   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:35.943067   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:35.943205   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:35.943347   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:35.943501   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:35.943687   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:35.943697   60829 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:36.060257   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:36.060297   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.060608   60829 buildroot.go:166] provisioning hostname "default-k8s-diff-port-327790"
	I1216 20:59:36.060634   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.060853   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.063758   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.064060   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.064097   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.064222   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.064427   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.064600   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.064745   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.064910   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.065132   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.065151   60829 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-327790 && echo "default-k8s-diff-port-327790" | sudo tee /etc/hostname
	I1216 20:59:36.194522   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-327790
	
	I1216 20:59:36.194555   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.197422   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.197770   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.197818   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.198007   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.198217   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.198446   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.198606   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.198803   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.199037   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.199062   60829 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-327790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-327790/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-327790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:36.320779   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:36.320808   60829 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:36.320833   60829 buildroot.go:174] setting up certificates
	I1216 20:59:36.320845   60829 provision.go:84] configureAuth start
	I1216 20:59:36.320854   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.321171   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:36.323701   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.324019   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.324044   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.324254   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.326002   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.326317   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.326348   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.326478   60829 provision.go:143] copyHostCerts
	I1216 20:59:36.326555   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:36.326567   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:36.326635   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:36.326747   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:36.326759   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:36.326786   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:36.326856   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:36.326866   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:36.326887   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:36.326949   60829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-327790 san=[127.0.0.1 192.168.39.162 default-k8s-diff-port-327790 localhost minikube]
	I1216 20:59:36.480215   60829 provision.go:177] copyRemoteCerts
	I1216 20:59:36.480278   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:36.480304   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.482859   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.483213   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.483258   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.483500   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.483712   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.483903   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.484087   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:36.571252   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:36.599399   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1216 20:59:36.624194   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 20:59:36.649294   60829 provision.go:87] duration metric: took 328.437433ms to configureAuth
	I1216 20:59:36.649325   60829 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:36.649494   60829 config.go:182] Loaded profile config "default-k8s-diff-port-327790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:59:36.649567   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.652411   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.652838   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.652868   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.653006   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.653264   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.653490   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.653704   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.653879   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.654059   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.654076   60829 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:36.893006   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:36.893043   60829 machine.go:96] duration metric: took 953.113126ms to provisionDockerMachine
	I1216 20:59:36.893057   60829 start.go:293] postStartSetup for "default-k8s-diff-port-327790" (driver="kvm2")
	I1216 20:59:36.893070   60829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:36.893101   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:36.893466   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:36.893494   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.896151   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.896531   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.896561   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.896683   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.896893   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.897100   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.897280   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:36.982077   60829 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:36.986598   60829 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:36.986624   60829 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:36.986702   60829 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:36.986795   60829 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:36.986919   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:36.996453   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:37.021838   60829 start.go:296] duration metric: took 128.770799ms for postStartSetup
	I1216 20:59:37.021873   60829 fix.go:56] duration metric: took 19.961410312s for fixHost
	I1216 20:59:37.021896   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.024668   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.025171   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.025207   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.025369   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.025591   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.025746   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.025892   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.026040   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:37.026257   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:37.026273   60829 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:37.140228   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382777.110726967
	
	I1216 20:59:37.140254   60829 fix.go:216] guest clock: 1734382777.110726967
	I1216 20:59:37.140264   60829 fix.go:229] Guest: 2024-12-16 20:59:37.110726967 +0000 UTC Remote: 2024-12-16 20:59:37.021877328 +0000 UTC m=+246.706572335 (delta=88.849639ms)
	I1216 20:59:37.140308   60829 fix.go:200] guest clock delta is within tolerance: 88.849639ms
	I1216 20:59:37.140315   60829 start.go:83] releasing machines lock for "default-k8s-diff-port-327790", held for 20.079880217s
	I1216 20:59:37.140347   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.140632   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:37.143268   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.143748   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.143775   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.143983   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144601   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144789   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144883   60829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:37.144930   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.145028   60829 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:37.145060   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.147817   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148192   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.148219   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148315   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148364   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.148576   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.148755   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.148776   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148804   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.148964   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:37.149020   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.149141   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.149285   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.149439   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:37.232354   60829 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:37.261803   60829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:37.416094   60829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:37.425458   60829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:37.425566   60829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:37.448873   60829 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:37.448914   60829 start.go:495] detecting cgroup driver to use...
	I1216 20:59:37.449014   60829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:37.472474   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:37.492445   60829 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:37.492518   60829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:37.510478   60829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:37.525452   60829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:37.642105   60829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:37.814506   60829 docker.go:233] disabling docker service ...
	I1216 20:59:37.814590   60829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:37.829046   60829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:37.845049   60829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:38.009931   60829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:38.158000   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:38.174376   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:38.197489   60829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 20:59:38.197555   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.213974   60829 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:38.214034   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.230383   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.244599   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.257574   60829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:38.273377   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.285854   60829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.312687   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.329105   60829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:38.343596   60829 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:38.343679   60829 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:38.362530   60829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 20:59:38.374384   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:38.564793   60829 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:38.682792   60829 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:38.682873   60829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:38.689164   60829 start.go:563] Will wait 60s for crictl version
	I1216 20:59:38.689251   60829 ssh_runner.go:195] Run: which crictl
	I1216 20:59:38.693994   60829 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:38.746808   60829 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:38.746913   60829 ssh_runner.go:195] Run: crio --version
	I1216 20:59:38.788490   60829 ssh_runner.go:195] Run: crio --version
	I1216 20:59:38.823957   60829 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 20:59:37.167470   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .Start
	I1216 20:59:37.167715   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring networks are active...
	I1216 20:59:37.168626   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring network default is active
	I1216 20:59:37.169114   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring network mk-old-k8s-version-847766 is active
	I1216 20:59:37.169670   60933 main.go:141] libmachine: (old-k8s-version-847766) Getting domain xml...
	I1216 20:59:37.170345   60933 main.go:141] libmachine: (old-k8s-version-847766) Creating domain...
	I1216 20:59:38.535579   60933 main.go:141] libmachine: (old-k8s-version-847766) Waiting to get IP...
	I1216 20:59:38.536661   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:38.537089   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:38.537174   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:38.537078   61973 retry.go:31] will retry after 277.62307ms: waiting for machine to come up
	I1216 20:59:38.816788   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:38.817329   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:38.817360   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:38.817272   61973 retry.go:31] will retry after 346.694382ms: waiting for machine to come up
	I1216 20:59:39.165778   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:39.166377   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:39.166436   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:39.166355   61973 retry.go:31] will retry after 416.599295ms: waiting for machine to come up
	I1216 20:59:38.825413   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:38.828442   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:38.828836   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:38.828870   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:38.829125   60829 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:38.833715   60829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:38.848989   60829 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-327790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:38.849121   60829 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:59:38.849169   60829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:38.891356   60829 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 20:59:38.891432   60829 ssh_runner.go:195] Run: which lz4
	I1216 20:59:38.896669   60829 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 20:59:38.901209   60829 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 20:59:38.901253   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1216 20:59:37.928929   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:38.428939   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:38.454184   60421 api_server.go:72] duration metric: took 1.02597754s to wait for apiserver process to appear ...
	I1216 20:59:38.454211   60421 api_server.go:88] waiting for apiserver healthz status ...
	I1216 20:59:38.454252   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:38.454842   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 20:59:38.954378   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:39.585259   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:39.585762   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:39.585791   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:39.585737   61973 retry.go:31] will retry after 526.969594ms: waiting for machine to come up
	I1216 20:59:40.114653   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:40.115175   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:40.115205   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:40.115140   61973 retry.go:31] will retry after 502.283372ms: waiting for machine to come up
	I1216 20:59:40.619067   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:40.619633   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:40.619682   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:40.619571   61973 retry.go:31] will retry after 764.799982ms: waiting for machine to come up
	I1216 20:59:41.385515   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:41.386066   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:41.386100   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:41.386027   61973 retry.go:31] will retry after 982.237202ms: waiting for machine to come up
	I1216 20:59:42.369934   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:42.370414   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:42.370449   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:42.370373   61973 retry.go:31] will retry after 1.163280736s: waiting for machine to come up
	I1216 20:59:43.534829   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:43.535194   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:43.535224   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:43.535143   61973 retry.go:31] will retry after 1.630958514s: waiting for machine to come up
	I1216 20:59:40.539994   60829 crio.go:462] duration metric: took 1.643361409s to copy over tarball
	I1216 20:59:40.540066   60829 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 20:59:42.840346   60829 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.30025199s)
	I1216 20:59:42.840382   60829 crio.go:469] duration metric: took 2.300357568s to extract the tarball
	I1216 20:59:42.840392   60829 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 20:59:42.881650   60829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:42.928089   60829 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 20:59:42.928120   60829 cache_images.go:84] Images are preloaded, skipping loading
	I1216 20:59:42.928129   60829 kubeadm.go:934] updating node { 192.168.39.162 8444 v1.32.0 crio true true} ...
	I1216 20:59:42.928222   60829 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-327790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 20:59:42.928286   60829 ssh_runner.go:195] Run: crio config
	I1216 20:59:42.983315   60829 cni.go:84] Creating CNI manager for ""
	I1216 20:59:42.983348   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:42.983360   60829 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:59:42.983396   60829 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8444 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-327790 NodeName:default-k8s-diff-port-327790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 20:59:42.983556   60829 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-327790"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.162"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 20:59:42.983631   60829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 20:59:42.996192   60829 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:59:42.996283   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:59:43.008389   60829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 20:59:43.027984   60829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:59:43.045672   60829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1216 20:59:43.063620   60829 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1216 20:59:43.067925   60829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:43.082946   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:43.220929   60829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:59:43.243843   60829 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790 for IP: 192.168.39.162
	I1216 20:59:43.243870   60829 certs.go:194] generating shared ca certs ...
	I1216 20:59:43.243888   60829 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:59:43.244125   60829 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:59:43.244185   60829 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:59:43.244200   60829 certs.go:256] generating profile certs ...
	I1216 20:59:43.244324   60829 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.key
	I1216 20:59:43.244400   60829 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.key.0f0bf709
	I1216 20:59:43.244449   60829 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.key
	I1216 20:59:43.244606   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:59:43.244649   60829 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:59:43.244666   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:59:43.244689   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:59:43.244711   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:59:43.244731   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:59:43.244776   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:43.245449   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:59:43.283598   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:59:43.309321   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:59:43.343071   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:59:43.379763   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1216 20:59:43.409794   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 20:59:43.437074   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:59:43.462616   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 20:59:43.487711   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:59:43.512636   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:59:43.539050   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:59:43.566507   60829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 20:59:43.584425   60829 ssh_runner.go:195] Run: openssl version
	I1216 20:59:43.590996   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:59:43.604384   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.609342   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.609404   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.615902   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:59:43.627432   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:59:43.638929   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.644189   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.644267   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.650550   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 20:59:43.662678   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:59:43.674981   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.680022   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.680113   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.686159   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 20:59:43.697897   60829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:59:43.702835   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 20:59:43.709262   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 20:59:43.716370   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 20:59:43.725031   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 20:59:43.732876   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 20:59:43.739810   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 20:59:43.746998   60829 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-327790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:59:43.747131   60829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:59:43.747189   60829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:43.791895   60829 cri.go:89] found id: ""
	I1216 20:59:43.791979   60829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 20:59:43.802858   60829 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 20:59:43.802886   60829 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 20:59:43.802943   60829 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 20:59:43.813313   60829 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 20:59:43.814296   60829 kubeconfig.go:125] found "default-k8s-diff-port-327790" server: "https://192.168.39.162:8444"
	I1216 20:59:43.816374   60829 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 20:59:43.825834   60829 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1216 20:59:43.825871   60829 kubeadm.go:1160] stopping kube-system containers ...
	I1216 20:59:43.825884   60829 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 20:59:43.825934   60829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:43.870890   60829 cri.go:89] found id: ""
	I1216 20:59:43.870965   60829 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 20:59:43.888155   60829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:59:43.898356   60829 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:59:43.898381   60829 kubeadm.go:157] found existing configuration files:
	
	I1216 20:59:43.898445   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1216 20:59:43.908232   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:59:43.908310   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:59:43.918637   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1216 20:59:43.928255   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:59:43.928343   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:59:43.938479   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1216 20:59:43.948085   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:59:43.948157   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:59:43.959080   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1216 20:59:43.969218   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:59:43.969275   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 20:59:43.980063   60829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 20:59:43.990768   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:44.125741   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:44.845177   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:45.049512   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:45.162055   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:45.284927   60829 api_server.go:52] waiting for apiserver process to appear ...
	I1216 20:59:45.285036   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:43.954985   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:43.955087   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:45.168144   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:45.168719   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:45.168750   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:45.168671   61973 retry.go:31] will retry after 1.835631107s: waiting for machine to come up
	I1216 20:59:47.005854   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:47.006380   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:47.006422   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:47.006339   61973 retry.go:31] will retry after 1.943800898s: waiting for machine to come up
	I1216 20:59:48.951552   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:48.952050   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:48.952114   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:48.952008   61973 retry.go:31] will retry after 2.949898251s: waiting for machine to come up
	I1216 20:59:45.785964   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:46.285989   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:46.339555   60829 api_server.go:72] duration metric: took 1.054628295s to wait for apiserver process to appear ...
	I1216 20:59:46.339597   60829 api_server.go:88] waiting for apiserver healthz status ...
	I1216 20:59:46.339636   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:46.340197   60829 api_server.go:269] stopped: https://192.168.39.162:8444/healthz: Get "https://192.168.39.162:8444/healthz": dial tcp 192.168.39.162:8444: connect: connection refused
	I1216 20:59:46.839771   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.461907   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 20:59:49.461943   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 20:59:49.461958   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.513069   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 20:59:49.513121   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 20:59:49.840517   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.846051   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 20:59:49.846086   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 20:59:50.339824   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:50.347663   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 20:59:50.347708   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 20:59:50.840385   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:50.844943   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1216 20:59:50.854518   60829 api_server.go:141] control plane version: v1.32.0
	I1216 20:59:50.854546   60829 api_server.go:131] duration metric: took 4.514941385s to wait for apiserver health ...
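The block above is the apiserver readiness wait: /healthz is polled roughly every 500ms, and connection refused, a 403 for the anonymous user (RBAC bootstrap roles not yet applied) and a 500 with failing post-start hooks are all treated as "not ready yet" until a plain 200 "ok" comes back. A simplified sketch of such a polling loop (illustrative only, not the actual api_server.go; the insecure TLS config is used here only because the sketch carries no CA bundle):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
// Any transport error or non-200 status simply means "try again".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.162:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}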
	I1216 20:59:50.854554   60829 cni.go:84] Creating CNI manager for ""
	I1216 20:59:50.854560   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:50.856538   60829 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 20:59:48.956352   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:48.956414   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:51.905108   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:51.905560   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:51.905594   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:51.905505   61973 retry.go:31] will retry after 3.44069953s: waiting for machine to come up
	I1216 20:59:50.858169   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 20:59:50.882809   60829 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 20:59:50.912787   60829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 20:59:50.933650   60829 system_pods.go:59] 8 kube-system pods found
	I1216 20:59:50.933693   60829 system_pods.go:61] "coredns-668d6bf9bc-tqh9s" [56b4db37-b6bc-49eb-b45f-b8b4d1f16eed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 20:59:50.933705   60829 system_pods.go:61] "etcd-default-k8s-diff-port-327790" [067f7c41-3763-42d3-af06-ad50fad3d206] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 20:59:50.933713   60829 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-327790" [f1964b5b-9d2b-4f82-afc6-2f359c9b8827] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 20:59:50.933722   60829 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-327790" [fd7479e3-be26-4bb0-b53a-e40766a33996] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 20:59:50.933742   60829 system_pods.go:61] "kube-proxy-mplxr" [027abdc5-7022-4528-a93f-36f3b10115ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 20:59:50.933751   60829 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-327790" [d7416a53-ccb4-46fd-9992-46cbf7ec0a3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 20:59:50.933763   60829 system_pods.go:61] "metrics-server-f79f97bbb-hlt7s" [d42906e3-387c-493e-9d06-5bb654dc9784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 20:59:50.933772   60829 system_pods.go:61] "storage-provisioner" [c774635a-faca-4a1a-8f4e-2161447ebaa1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 20:59:50.933785   60829 system_pods.go:74] duration metric: took 20.968988ms to wait for pod list to return data ...
	I1216 20:59:50.933804   60829 node_conditions.go:102] verifying NodePressure condition ...
	I1216 20:59:50.937958   60829 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 20:59:50.937986   60829 node_conditions.go:123] node cpu capacity is 2
	I1216 20:59:50.938008   60829 node_conditions.go:105] duration metric: took 4.196302ms to run NodePressure ...
	I1216 20:59:50.938030   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:51.231412   60829 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 20:59:51.236005   60829 kubeadm.go:739] kubelet initialised
	I1216 20:59:51.236029   60829 kubeadm.go:740] duration metric: took 4.585977ms waiting for restarted kubelet to initialise ...
	I1216 20:59:51.236042   60829 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 20:59:51.243608   60829 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace to be "Ready" ...
	I1216 20:59:53.250907   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
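pod_ready.go keeps re-reading each system-critical pod and waits for its Ready condition to flip to True; the coredns pod above is still reporting Ready=False at this point. A minimal sketch of that condition check using the upstream core/v1 types (not minikube's exact helper):

package health

import corev1 "k8s.io/api/core/v1"

// isPodReady mirrors the check behind the "Ready":"False" messages above:
// the pod counts as ready only when its PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}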
	I1216 20:59:56.696377   60215 start.go:364] duration metric: took 54.44579772s to acquireMachinesLock for "embed-certs-606219"
	I1216 20:59:56.696450   60215 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:56.696470   60215 fix.go:54] fixHost starting: 
	I1216 20:59:56.696862   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:56.696902   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:56.714627   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42069
	I1216 20:59:56.715074   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:56.715599   60215 main.go:141] libmachine: Using API Version  1
	I1216 20:59:56.715629   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:56.715953   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:56.716116   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 20:59:56.716252   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 20:59:56.717876   60215 fix.go:112] recreateIfNeeded on embed-certs-606219: state=Stopped err=<nil>
	I1216 20:59:56.717902   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	W1216 20:59:56.718088   60215 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:56.720072   60215 out.go:177] * Restarting existing kvm2 VM for "embed-certs-606219" ...
	I1216 20:59:53.957328   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:53.957395   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:55.349557   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.350105   60933 main.go:141] libmachine: (old-k8s-version-847766) Found IP for machine: 192.168.72.240
	I1216 20:59:55.350129   60933 main.go:141] libmachine: (old-k8s-version-847766) Reserving static IP address...
	I1216 20:59:55.350140   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has current primary IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.350574   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "old-k8s-version-847766", mac: "52:54:00:c4:f2:8a", ip: "192.168.72.240"} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.350608   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | skip adding static IP to network mk-old-k8s-version-847766 - found existing host DHCP lease matching {name: "old-k8s-version-847766", mac: "52:54:00:c4:f2:8a", ip: "192.168.72.240"}
	I1216 20:59:55.350623   60933 main.go:141] libmachine: (old-k8s-version-847766) Reserved static IP address: 192.168.72.240
	I1216 20:59:55.350646   60933 main.go:141] libmachine: (old-k8s-version-847766) Waiting for SSH to be available...
	I1216 20:59:55.350662   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Getting to WaitForSSH function...
	I1216 20:59:55.353011   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.353346   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.353369   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.353535   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Using SSH client type: external
	I1216 20:59:55.353560   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa (-rw-------)
	I1216 20:59:55.353592   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:55.353606   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | About to run SSH command:
	I1216 20:59:55.353621   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | exit 0
	I1216 20:59:55.480726   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | SSH cmd err, output: <nil>: 
	I1216 20:59:55.481062   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetConfigRaw
	I1216 20:59:55.481692   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:55.484113   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.484500   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.484537   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.484769   60933 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/config.json ...
	I1216 20:59:55.484985   60933 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:55.485008   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:55.485220   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.487511   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.487835   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.487862   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.487958   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.488134   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.488268   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.488405   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.488546   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.488725   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.488735   60933 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:55.596092   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:55.596127   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.596401   60933 buildroot.go:166] provisioning hostname "old-k8s-version-847766"
	I1216 20:59:55.596426   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.596644   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.599286   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.599631   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.599662   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.599814   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.600010   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.600166   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.600299   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.600462   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.600665   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.600678   60933 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-847766 && echo "old-k8s-version-847766" | sudo tee /etc/hostname
	I1216 20:59:55.731851   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-847766
	
	I1216 20:59:55.731879   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.734802   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.735155   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.735186   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.735422   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.735650   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.735815   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.736030   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.736194   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.736377   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.736393   60933 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-847766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-847766/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-847766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:55.857050   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:55.857108   60933 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:55.857138   60933 buildroot.go:174] setting up certificates
	I1216 20:59:55.857163   60933 provision.go:84] configureAuth start
	I1216 20:59:55.857180   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.857505   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:55.860286   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.860613   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.860643   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.860826   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.863292   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.863682   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.863709   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.863871   60933 provision.go:143] copyHostCerts
	I1216 20:59:55.863920   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:55.863929   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:55.863986   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:55.864069   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:55.864077   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:55.864104   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:55.864159   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:55.864177   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:55.864202   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:55.864250   60933 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-847766 san=[127.0.0.1 192.168.72.240 localhost minikube old-k8s-version-847766]
	I1216 20:59:56.058548   60933 provision.go:177] copyRemoteCerts
	I1216 20:59:56.058603   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:56.058638   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.061354   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.061666   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.061707   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.061838   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.062039   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.062200   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.062356   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.146788   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:56.172789   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1216 20:59:56.198040   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 20:59:56.222476   60933 provision.go:87] duration metric: took 365.299433ms to configureAuth
	I1216 20:59:56.222505   60933 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:56.222706   60933 config.go:182] Loaded profile config "old-k8s-version-847766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:59:56.222790   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.225376   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.225752   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.225779   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.225965   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.226182   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.226363   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.226516   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.226687   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:56.226887   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:56.226906   60933 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:56.451434   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:56.451464   60933 machine.go:96] duration metric: took 966.463181ms to provisionDockerMachine
	I1216 20:59:56.451478   60933 start.go:293] postStartSetup for "old-k8s-version-847766" (driver="kvm2")
	I1216 20:59:56.451513   60933 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:56.451541   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.451926   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:56.451980   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.454840   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.455302   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.455331   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.455454   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.455661   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.455814   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.455988   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.542904   60933 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:56.547362   60933 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:56.547389   60933 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:56.547467   60933 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:56.547568   60933 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:56.547677   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:56.557902   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:56.582796   60933 start.go:296] duration metric: took 131.303406ms for postStartSetup
	I1216 20:59:56.582846   60933 fix.go:56] duration metric: took 19.442354832s for fixHost
	I1216 20:59:56.582872   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.585478   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.585803   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.585831   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.586011   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.586194   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.586358   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.586472   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.586640   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:56.586809   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:56.586819   60933 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:56.696254   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382796.650794736
	
	I1216 20:59:56.696274   60933 fix.go:216] guest clock: 1734382796.650794736
	I1216 20:59:56.696281   60933 fix.go:229] Guest: 2024-12-16 20:59:56.650794736 +0000 UTC Remote: 2024-12-16 20:59:56.582851742 +0000 UTC m=+262.230512454 (delta=67.942994ms)
	I1216 20:59:56.696299   60933 fix.go:200] guest clock delta is within tolerance: 67.942994ms
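The clock check above runs `date +%s.%N` on the guest, parses the fractional-second timestamp, and compares it with the host's wall clock; the 67.9ms delta is inside the allowed tolerance, so no forced time sync is needed. A rough sketch of the delta computation (illustrative only; the tolerance constant below is an assumption, the real threshold lives in fix.go):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock is ahead of (positive) or behind (negative) hostNow.
func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(hostNow), nil
}

func main() {
	delta, err := clockDelta("1734382796.650794736", time.Unix(1734382796, 582851742))
	if err != nil {
		panic(err)
	}
	// Assumed tolerance for illustration; the log only shows that ~68ms passes.
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
}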
	I1216 20:59:56.696304   60933 start.go:83] releasing machines lock for "old-k8s-version-847766", held for 19.555844424s
	I1216 20:59:56.696333   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.696608   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:56.699486   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.699932   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.699964   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.700068   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700645   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700846   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700948   60933 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:56.701007   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.701115   60933 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:56.701140   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.703937   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704117   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704314   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.704342   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704496   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.704567   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.704601   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704680   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.704762   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.704836   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.704982   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.704987   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.705134   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.705259   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.784295   60933 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:56.817481   60933 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:56.968124   60933 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:56.979827   60933 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:56.979892   60933 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:56.997867   60933 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:56.997891   60933 start.go:495] detecting cgroup driver to use...
	I1216 20:59:56.997954   60933 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:57.016064   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:57.031596   60933 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:57.031665   60933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:57.047562   60933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:57.062737   60933 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:57.183918   60933 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:57.354699   60933 docker.go:233] disabling docker service ...
	I1216 20:59:57.354794   60933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:57.373311   60933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:57.390014   60933 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:57.523623   60933 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:57.656261   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:57.671374   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:57.692647   60933 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1216 20:59:57.692709   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.704496   60933 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:57.704548   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.715848   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.727022   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
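	(Editor's note) The sed invocations above simply rewrite keys in the CRI-O drop-in: pause_image, cgroup_manager, and conmon_cgroup. A rough Go equivalent of the cgroup_manager edit, assuming direct access to the file rather than a remote ssh_runner (illustrative only):

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // drop-in named in the log
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Same substitution as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	data = re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}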
	I1216 20:59:57.738899   60933 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:57.756457   60933 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:57.773236   60933 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:57.773289   60933 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:57.789209   60933 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
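	(Editor's note) The sysctl probe fails only because br_netfilter has not been loaded yet, so the module is loaded and IPv4 forwarding is switched on through procfs. A small sketch of those two checks, using the exact paths from the log (must run as root):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		panic(err)
	}
	// The earlier "cannot stat" just means this file is absent until br_netfilter is loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("br_netfilter not loaded yet:", err)
	}
}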
	I1216 20:59:57.800881   60933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:57.927794   60933 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:58.038173   60933 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:58.038256   60933 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:58.044633   60933 start.go:563] Will wait 60s for crictl version
	I1216 20:59:58.044705   60933 ssh_runner.go:195] Run: which crictl
	I1216 20:59:58.048781   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:58.088449   60933 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:58.088579   60933 ssh_runner.go:195] Run: crio --version
	I1216 20:59:58.119211   60933 ssh_runner.go:195] Run: crio --version
	I1216 20:59:58.151411   60933 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1216 20:59:58.152582   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:58.155196   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:58.155558   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:58.155587   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:58.155763   60933 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:58.160369   60933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:58.174013   60933 kubeadm.go:883] updating cluster {Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:58.174155   60933 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:59:58.174212   60933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:58.226674   60933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 20:59:58.226747   60933 ssh_runner.go:195] Run: which lz4
	I1216 20:59:58.231330   60933 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 20:59:58.236178   60933 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 20:59:58.236214   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
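	(Editor's note) The stat probe fails, so the ~473 MB preload tarball is copied from the host cache onto the node. The same check-then-copy decision, sketched locally (paths from the log; the real transfer happens over SCP to the VM, not a local copy):

package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	src := "/home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
	dst := "/preloaded.tar.lz4"

	// Mirror of the existence check: only copy when the target is missing.
	if _, err := os.Stat(dst); err == nil {
		fmt.Println("preload already present, skipping copy")
		return
	}
	in, err := os.Open(src)
	if err != nil {
		panic(err)
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		panic(err)
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	if err != nil {
		panic(err)
	}
	fmt.Printf("copied %d bytes\n", n)
}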
	I1216 20:59:56.721746   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Start
	I1216 20:59:56.721946   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring networks are active...
	I1216 20:59:56.722810   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring network default is active
	I1216 20:59:56.723209   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring network mk-embed-certs-606219 is active
	I1216 20:59:56.723644   60215 main.go:141] libmachine: (embed-certs-606219) Getting domain xml...
	I1216 20:59:56.724387   60215 main.go:141] libmachine: (embed-certs-606219) Creating domain...
	I1216 20:59:58.005906   60215 main.go:141] libmachine: (embed-certs-606219) Waiting to get IP...
	I1216 20:59:58.006646   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.007021   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.007136   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.007017   62108 retry.go:31] will retry after 280.124694ms: waiting for machine to come up
	I1216 20:59:58.288552   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.289049   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.289078   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.289013   62108 retry.go:31] will retry after 299.873899ms: waiting for machine to come up
	I1216 20:59:58.590757   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.591593   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.591625   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.591487   62108 retry.go:31] will retry after 486.884982ms: waiting for machine to come up
	I1216 20:59:59.079996   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:59.080618   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:59.080649   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:59.080581   62108 retry.go:31] will retry after 608.856993ms: waiting for machine to come up
	I1216 20:59:59.691549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:59.692107   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:59.692139   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:59.692064   62108 retry.go:31] will retry after 730.774006ms: waiting for machine to come up
	I1216 20:59:55.752607   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 20:59:58.251902   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:00.254126   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 20:59:58.958114   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:58.958161   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.567722   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": read tcp 192.168.50.1:38738->192.168.50.240:8443: read: connection reset by peer
	I1216 20:59:59.567773   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.568271   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 20:59:59.954745   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.955447   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 21:00:00.455116   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:00.456036   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 21:00:00.954418   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:00.100507   60933 crio.go:462] duration metric: took 1.869217257s to copy over tarball
	I1216 21:00:00.100619   60933 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 21:00:03.581430   60933 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.480755636s)
	I1216 21:00:03.581469   60933 crio.go:469] duration metric: took 3.480924144s to extract the tarball
	I1216 21:00:03.581478   60933 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 21:00:03.627932   60933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:03.667985   60933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 21:00:03.668013   60933 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 21:00:03.668078   60933 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:03.668110   60933 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.668207   60933 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.668262   60933 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.668262   60933 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:03.668332   60933 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1216 21:00:03.668215   60933 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.668092   60933 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.670096   60933 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:03.670294   60933 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.670305   60933 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.670305   60933 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.670333   60933 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.670394   60933 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1216 21:00:03.670396   60933 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:03.670467   60933 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.861573   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.869704   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.885911   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.904748   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1216 21:00:03.905328   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.906138   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.936548   60933 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1216 21:00:03.936658   60933 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.936736   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.019039   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.033811   60933 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1216 21:00:04.033863   60933 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.033927   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.082946   60933 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1216 21:00:04.082995   60933 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.083008   60933 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1216 21:00:04.083050   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083055   60933 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1216 21:00:04.083063   60933 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1216 21:00:04.083073   60933 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.083133   60933 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1216 21:00:04.083203   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.083205   60933 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.083306   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083145   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083139   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.123434   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.123702   60933 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1216 21:00:04.123740   60933 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.123786   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.150878   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:04.155586   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.155774   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.155877   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.155968   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.156205   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.226110   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.226429   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:00.424272   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:00.424766   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:00.424795   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:00.424712   62108 retry.go:31] will retry after 947.177724ms: waiting for machine to come up
	I1216 21:00:01.373798   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:01.374448   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:01.374486   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:01.374376   62108 retry.go:31] will retry after 755.735247ms: waiting for machine to come up
	I1216 21:00:02.132092   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:02.132690   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:02.132716   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:02.132636   62108 retry.go:31] will retry after 1.25933291s: waiting for machine to come up
	I1216 21:00:03.393390   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:03.393951   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:03.393987   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:03.393887   62108 retry.go:31] will retry after 1.654271195s: waiting for machine to come up
	I1216 21:00:00.768561   60829 pod_ready.go:93] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:00.768603   60829 pod_ready.go:82] duration metric: took 9.524968022s for pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:00.768619   60829 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:02.778467   60829 pod_ready.go:93] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:02.778507   60829 pod_ready.go:82] duration metric: took 2.009878604s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:02.778523   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:03.290454   60829 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:03.290490   60829 pod_ready.go:82] duration metric: took 511.956426ms for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:03.290505   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
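	(Editor's note) pod_ready.go is polling each control-plane pod's Ready condition with a 4-minute budget. A hedged stand-alone version of that wait written against plain client-go rather than minikube's helpers; namespace, pod name, and timeout are taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // same budget as "waiting up to 4m0s"
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-controller-manager-default-k8s-diff-port-327790", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}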
	I1216 21:00:04.533609   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:04.533639   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:04.533655   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:04.679801   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:04.679836   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:04.955306   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.723827   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.723870   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:05.723892   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.750638   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.750674   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:05.955092   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.983280   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.983332   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:06.454742   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:06.467886   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:06.467924   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:06.954428   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:06.960039   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 200:
	ok
	I1216 21:00:06.969187   60421 api_server.go:141] control plane version: v1.32.0
	I1216 21:00:06.969231   60421 api_server.go:131] duration metric: took 28.515011952s to wait for apiserver health ...
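	(Editor's note) The healthz loop above tolerates connection resets, 403s from the anonymous user, and 500s while post-start hooks finish, and stops only once /healthz returns 200. A compact version of that retry loop, assuming an insecure test client rather than minikube's api_server.go plumbing:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Test cluster: the serving certificate is not trusted by the host, so skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "returned 200: ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // refused / 403 / 500 responses are retried
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitHealthy("https://192.168.50.240:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}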
	I1216 21:00:06.969242   60421 cni.go:84] Creating CNI manager for ""
	I1216 21:00:06.969249   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:06.971475   60421 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:00:06.973035   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:00:06.992348   60421 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:00:07.020819   60421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:00:07.035254   60421 system_pods.go:59] 8 kube-system pods found
	I1216 21:00:07.035308   60421 system_pods.go:61] "coredns-668d6bf9bc-snhjf" [c0cf42c8-521a-4d02-9d43-ff7a700b0eca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:00:07.035321   60421 system_pods.go:61] "etcd-no-preload-232338" [01ca2051-5953-44fd-bfff-40aa16ec7aca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 21:00:07.035335   60421 system_pods.go:61] "kube-apiserver-no-preload-232338" [f1fbbb3b-a0e5-4200-89ef-67085e51a31d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 21:00:07.035359   60421 system_pods.go:61] "kube-controller-manager-no-preload-232338" [200039ad-1a2c-4dc4-8307-d8c882d69f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 21:00:07.035373   60421 system_pods.go:61] "kube-proxy-5mw2b" [8fbddf14-8697-451a-a3c7-873fdd437247] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 21:00:07.035382   60421 system_pods.go:61] "kube-scheduler-no-preload-232338" [1b9a7a43-59fc-44ba-9863-04fb90e6554f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 21:00:07.035396   60421 system_pods.go:61] "metrics-server-f79f97bbb-5xf67" [447144e5-11d8-48f7-b2fd-7ab9fb3c04de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:00:07.035409   60421 system_pods.go:61] "storage-provisioner" [fb293bd2-f5be-4086-b821-ffd7df58dd5e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 21:00:07.035420   60421 system_pods.go:74] duration metric: took 14.571089ms to wait for pod list to return data ...
	I1216 21:00:07.035431   60421 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:00:07.044467   60421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:00:07.044592   60421 node_conditions.go:123] node cpu capacity is 2
	I1216 21:00:07.044633   60421 node_conditions.go:105] duration metric: took 9.191874ms to run NodePressure ...
	I1216 21:00:07.044668   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.388388   60421 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 21:00:07.394851   60421 kubeadm.go:739] kubelet initialised
	I1216 21:00:07.394881   60421 kubeadm.go:740] duration metric: took 6.459945ms waiting for restarted kubelet to initialise ...
	I1216 21:00:07.394891   60421 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:00:07.401877   60421 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.410697   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.410732   60421 pod_ready.go:82] duration metric: took 8.80876ms for pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.410744   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.410755   60421 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.418118   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "etcd-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.418149   60421 pod_ready.go:82] duration metric: took 7.383445ms for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.418163   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "etcd-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.418172   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.427341   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "kube-apiserver-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.427414   60421 pod_ready.go:82] duration metric: took 9.234588ms for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.427424   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "kube-apiserver-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.427432   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.435329   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.435378   60421 pod_ready.go:82] duration metric: took 7.931923ms for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.435392   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.435408   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5mw2b" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:04.457220   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.457399   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.457507   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.457596   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.457687   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.613834   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.613870   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.613923   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.613931   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.613960   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.613972   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.619915   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1216 21:00:04.791265   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1216 21:00:04.791297   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.791315   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1216 21:00:04.791352   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1216 21:00:04.791366   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1216 21:00:04.791384   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1216 21:00:04.836463   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1216 21:00:04.836536   60933 cache_images.go:92] duration metric: took 1.168508622s to LoadCachedImages
	W1216 21:00:04.836633   60933 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
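	(Editor's note) cache_images.go decides "needs transfer" by asking the runtime for each expected image ID and comparing it against the cached hash; here the host-side cache files are also missing, so loading fails over to normal pulls during kubeadm init. A sketch of the presence check using the same podman probe the log runs (the expected hash is the one logged for coredns:1.7.0):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID runs the same probe the log shows for each expected image.
func imageID(image string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	want := "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16"
	got, err := imageID("registry.k8s.io/coredns:1.7.0")
	if err != nil || got != want {
		fmt.Println(`"registry.k8s.io/coredns:1.7.0" needs transfer`)
		return
	}
	fmt.Println("image already present with the expected ID")
}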
	I1216 21:00:04.836649   60933 kubeadm.go:934] updating node { 192.168.72.240 8443 v1.20.0 crio true true} ...
	I1216 21:00:04.836781   60933 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-847766 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 21:00:04.836877   60933 ssh_runner.go:195] Run: crio config
	I1216 21:00:04.898330   60933 cni.go:84] Creating CNI manager for ""
	I1216 21:00:04.898357   60933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:04.898371   60933 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 21:00:04.898396   60933 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.240 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-847766 NodeName:old-k8s-version-847766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1216 21:00:04.898560   60933 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-847766"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
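	The generated KubeletConfiguration above deliberately relaxes disk-pressure handling: the "0%" evictionHard thresholds and imageGCHighThresholdPercent: 100 effectively disable disk-based eviction, and the kubelet is pinned to the cgroupfs driver to match CRI-O. A minimal Go sketch (not minikube code; it assumes gopkg.in/yaml.v3 is available) that parses that fragment and surfaces those fields:

	// Illustrative only: parse the KubeletConfiguration fragment shown above and
	// print the fields that the generated config overrides.
	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	type kubeletConfig struct {
		CgroupDriver string            `yaml:"cgroupDriver"`
		HairpinMode  string            `yaml:"hairpinMode"`
		EvictionHard map[string]string `yaml:"evictionHard"`
		FailSwapOn   bool              `yaml:"failSwapOn"`
	}

	const fragment = `
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	`

	func main() {
		var cfg kubeletConfig
		if err := yaml.Unmarshal([]byte(fragment), &cfg); err != nil {
			panic(err)
		}
		// "0%" thresholds effectively disable disk-pressure eviction, as the
		// comment in the generated config notes.
		fmt.Printf("cgroupDriver=%s hairpinMode=%s evictionHard=%v failSwapOn=%v\n",
			cfg.CgroupDriver, cfg.HairpinMode, cfg.EvictionHard, cfg.FailSwapOn)
	}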
	
	I1216 21:00:04.898643   60933 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1216 21:00:04.910946   60933 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 21:00:04.911045   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 21:00:04.923199   60933 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1216 21:00:04.942705   60933 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 21:00:04.976598   60933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1216 21:00:05.001967   60933 ssh_runner.go:195] Run: grep 192.168.72.240	control-plane.minikube.internal$ /etc/hosts
	I1216 21:00:05.006819   60933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:00:05.020604   60933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:05.143039   60933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:00:05.162507   60933 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766 for IP: 192.168.72.240
	I1216 21:00:05.162535   60933 certs.go:194] generating shared ca certs ...
	I1216 21:00:05.162554   60933 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:05.162749   60933 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 21:00:05.162792   60933 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 21:00:05.162803   60933 certs.go:256] generating profile certs ...
	I1216 21:00:05.162907   60933 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.key
	I1216 21:00:05.162976   60933 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key.6c8704df
	I1216 21:00:05.163011   60933 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key
	I1216 21:00:05.163148   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 21:00:05.163176   60933 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 21:00:05.163186   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 21:00:05.163210   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 21:00:05.163278   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 21:00:05.163315   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 21:00:05.163379   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:05.164216   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 21:00:05.222491   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 21:00:05.253517   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 21:00:05.294338   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 21:00:05.342850   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 21:00:05.388068   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 21:00:05.422591   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 21:00:05.471916   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 21:00:05.505836   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 21:00:05.539404   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 21:00:05.570819   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 21:00:05.602079   60933 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 21:00:05.630577   60933 ssh_runner.go:195] Run: openssl version
	I1216 21:00:05.640017   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 21:00:05.653759   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.659573   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.659645   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.666667   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 21:00:05.680061   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 21:00:05.692776   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.698644   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.698728   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.705913   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 21:00:05.730062   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 21:00:05.750034   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.757158   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.757252   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.765679   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
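	The three test -L / ln -fs commands above follow OpenSSL's hashed-directory convention: each CA file under /usr/share/ca-certificates gets an /etc/ssl/certs/<subject-hash>.0 symlink, where the hash comes from openssl x509 -hash. A rough Go equivalent of one iteration (an illustrative sketch, not the code that produced this log):

	// Sketch: create the /etc/ssl/certs/<hash>.0 symlink for one CA file,
	// mirroring the openssl + ln -fs steps shown in the log above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func linkCert(certPath string) error {
		// Same command the log runs: openssl x509 -hash -noout -in <cert>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// ln -fs equivalent: drop any stale link, then recreate it.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}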
	I1216 21:00:05.782537   60933 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 21:00:05.788291   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 21:00:05.797707   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 21:00:05.807016   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 21:00:05.818160   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 21:00:05.827428   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 21:00:05.836499   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 21:00:05.846104   60933 kubeadm.go:392] StartCluster: {Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:00:05.846244   60933 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 21:00:05.846331   60933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:05.901274   60933 cri.go:89] found id: ""
	I1216 21:00:05.901376   60933 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 21:00:05.917353   60933 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 21:00:05.917381   60933 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 21:00:05.917439   60933 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 21:00:05.928587   60933 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 21:00:05.932546   60933 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-847766" does not appear in /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:00:05.933844   60933 kubeconfig.go:62] /home/jenkins/minikube-integration/20091-7083/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-847766" cluster setting kubeconfig missing "old-k8s-version-847766" context setting]
	I1216 21:00:05.935400   60933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:05.938054   60933 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 21:00:05.950384   60933 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.240
	I1216 21:00:05.950433   60933 kubeadm.go:1160] stopping kube-system containers ...
	I1216 21:00:05.950450   60933 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 21:00:05.950515   60933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:05.999495   60933 cri.go:89] found id: ""
	I1216 21:00:05.999588   60933 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 21:00:06.024765   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:00:06.037807   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:00:06.037836   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:00:06.037894   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:00:06.048926   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:00:06.048997   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:00:06.060167   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:00:06.070841   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:00:06.070910   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:00:06.083517   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:00:06.099124   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:00:06.099214   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:00:06.110004   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:00:06.125600   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:00:06.125668   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:00:06.137212   60933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:00:06.148873   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:06.316611   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.220187   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.549730   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.698864   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
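	Because existing configuration files were found, the restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the copied kubeadm.yaml instead of doing a full kubeadm init. A condensed Go sketch of that loop, using only values visible in the log lines above (not minikube's actual implementation):

	// Sketch: re-run the kubeadm "init" phases in the order the log shows,
	// against the generated /var/tmp/minikube/kubeadm.yaml.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH " +
				"kubeadm init phase " + p + " --config /var/tmp/minikube/kubeadm.yaml"
			c := exec.Command("/bin/bash", "-c", cmd)
			c.Stdout, c.Stderr = os.Stdout, os.Stderr
			if err := c.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", p, err)
				os.Exit(1)
			}
		}
	}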
	I1216 21:00:07.815495   60933 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:00:07.815657   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:08.316003   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:08.816482   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:09.315805   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:05.050699   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:05.051378   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:05.051413   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:05.051296   62108 retry.go:31] will retry after 2.184829789s: waiting for machine to come up
	I1216 21:00:07.237618   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:07.238137   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:07.238166   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:07.238049   62108 retry.go:31] will retry after 2.531717629s: waiting for machine to come up
	I1216 21:00:05.713060   60829 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:05.798544   60829 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.798569   60829 pod_ready.go:82] duration metric: took 2.508055323s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.798582   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mplxr" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.805322   60829 pod_ready.go:93] pod "kube-proxy-mplxr" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.805361   60829 pod_ready.go:82] duration metric: took 6.77ms for pod "kube-proxy-mplxr" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.805399   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.812700   60829 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.812727   60829 pod_ready.go:82] duration metric: took 7.281992ms for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.812741   60829 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.822004   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:10.321160   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:09.443582   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:11.443796   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:09.815863   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:10.316664   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:10.815852   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:11.316175   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:11.816446   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:12.316040   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:12.816172   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:13.316460   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:13.815700   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:14.316469   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:09.772318   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:09.772837   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:09.772869   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:09.772797   62108 retry.go:31] will retry after 2.557982234s: waiting for machine to come up
	I1216 21:00:12.331877   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:12.332340   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:12.332368   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:12.332298   62108 retry.go:31] will retry after 4.202991569s: waiting for machine to come up
	I1216 21:00:12.322897   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:14.323015   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:13.942154   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:16.442411   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:14.816539   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:15.315737   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:15.816465   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.316470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.816451   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:17.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:17.816470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:18.316165   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:18.816448   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:19.315972   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.539792   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.540299   60215 main.go:141] libmachine: (embed-certs-606219) Found IP for machine: 192.168.61.151
	I1216 21:00:16.540324   60215 main.go:141] libmachine: (embed-certs-606219) Reserving static IP address...
	I1216 21:00:16.540341   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has current primary IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.540771   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "embed-certs-606219", mac: "52:54:00:63:37:8f", ip: "192.168.61.151"} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.540810   60215 main.go:141] libmachine: (embed-certs-606219) DBG | skip adding static IP to network mk-embed-certs-606219 - found existing host DHCP lease matching {name: "embed-certs-606219", mac: "52:54:00:63:37:8f", ip: "192.168.61.151"}
	I1216 21:00:16.540827   60215 main.go:141] libmachine: (embed-certs-606219) Reserved static IP address: 192.168.61.151
	I1216 21:00:16.540839   60215 main.go:141] libmachine: (embed-certs-606219) Waiting for SSH to be available...
	I1216 21:00:16.540847   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Getting to WaitForSSH function...
	I1216 21:00:16.542958   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.543461   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.543503   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.543629   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Using SSH client type: external
	I1216 21:00:16.543663   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa (-rw-------)
	I1216 21:00:16.543696   60215 main.go:141] libmachine: (embed-certs-606219) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 21:00:16.543713   60215 main.go:141] libmachine: (embed-certs-606219) DBG | About to run SSH command:
	I1216 21:00:16.543732   60215 main.go:141] libmachine: (embed-certs-606219) DBG | exit 0
	I1216 21:00:16.671576   60215 main.go:141] libmachine: (embed-certs-606219) DBG | SSH cmd err, output: <nil>: 
	I1216 21:00:16.671965   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetConfigRaw
	I1216 21:00:16.672599   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:16.675179   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.675520   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.675549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.675726   60215 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/config.json ...
	I1216 21:00:16.675938   60215 machine.go:93] provisionDockerMachine start ...
	I1216 21:00:16.675955   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:16.676186   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.678481   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.678824   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.678846   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.679020   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.679203   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.679388   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.679530   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.679689   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.679883   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.679896   60215 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 21:00:16.791925   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 21:00:16.791959   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:16.792224   60215 buildroot.go:166] provisioning hostname "embed-certs-606219"
	I1216 21:00:16.792261   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:16.792492   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.794967   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.795359   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.795388   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.795496   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.795674   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.795845   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.795995   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.796238   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.796466   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.796486   60215 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-606219 && echo "embed-certs-606219" | sudo tee /etc/hostname
	I1216 21:00:16.923887   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-606219
	
	I1216 21:00:16.923922   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.926689   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.927228   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.927283   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.927500   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.927724   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.927943   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.928139   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.928396   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.928574   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.928590   60215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-606219' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-606219/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-606219' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 21:00:17.045462   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 21:00:17.045508   60215 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 21:00:17.045540   60215 buildroot.go:174] setting up certificates
	I1216 21:00:17.045560   60215 provision.go:84] configureAuth start
	I1216 21:00:17.045578   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:17.045889   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:17.048733   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.049038   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.049062   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.049216   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.051371   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.051713   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.051748   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.051861   60215 provision.go:143] copyHostCerts
	I1216 21:00:17.051940   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 21:00:17.051954   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 21:00:17.052033   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 21:00:17.052187   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 21:00:17.052203   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 21:00:17.052230   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 21:00:17.052306   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 21:00:17.052317   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 21:00:17.052342   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 21:00:17.052413   60215 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.embed-certs-606219 san=[127.0.0.1 192.168.61.151 embed-certs-606219 localhost minikube]
	I1216 21:00:17.345020   60215 provision.go:177] copyRemoteCerts
	I1216 21:00:17.345079   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 21:00:17.345116   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.348019   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.348323   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.348350   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.348554   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.348783   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.348931   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.349093   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:17.434520   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 21:00:17.462097   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1216 21:00:17.488071   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 21:00:17.516428   60215 provision.go:87] duration metric: took 470.851303ms to configureAuth
	I1216 21:00:17.516461   60215 buildroot.go:189] setting minikube options for container-runtime
	I1216 21:00:17.516673   60215 config.go:182] Loaded profile config "embed-certs-606219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:00:17.516763   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.519637   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.519981   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.520019   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.520229   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.520451   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.520654   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.520813   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.520977   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:17.521148   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:17.521166   60215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 21:00:17.787052   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 21:00:17.787084   60215 machine.go:96] duration metric: took 1.111132885s to provisionDockerMachine
	I1216 21:00:17.787111   60215 start.go:293] postStartSetup for "embed-certs-606219" (driver="kvm2")
	I1216 21:00:17.787126   60215 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 21:00:17.787145   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:17.787551   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 21:00:17.787588   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.790332   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.790710   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.790743   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.790891   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.791130   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.791336   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.791492   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:17.881548   60215 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 21:00:17.886692   60215 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 21:00:17.886720   60215 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 21:00:17.886788   60215 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 21:00:17.886886   60215 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 21:00:17.886983   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 21:00:17.897832   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:17.926273   60215 start.go:296] duration metric: took 139.147156ms for postStartSetup
	I1216 21:00:17.926316   60215 fix.go:56] duration metric: took 21.229856025s for fixHost
	I1216 21:00:17.926338   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.929204   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.929600   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.929623   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.929809   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.930036   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.930220   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.930411   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.930554   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:17.930723   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:17.930734   60215 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 21:00:18.040530   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382817.988837134
	
	I1216 21:00:18.040557   60215 fix.go:216] guest clock: 1734382817.988837134
	I1216 21:00:18.040590   60215 fix.go:229] Guest: 2024-12-16 21:00:17.988837134 +0000 UTC Remote: 2024-12-16 21:00:17.926320778 +0000 UTC m=+358.266755361 (delta=62.516356ms)
	I1216 21:00:18.040639   60215 fix.go:200] guest clock delta is within tolerance: 62.516356ms
	I1216 21:00:18.040650   60215 start.go:83] releasing machines lock for "embed-certs-606219", held for 21.34422537s
	I1216 21:00:18.040682   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.040997   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:18.044100   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.044549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.044584   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.044727   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045237   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045454   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045544   60215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 21:00:18.045602   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:18.045673   60215 ssh_runner.go:195] Run: cat /version.json
	I1216 21:00:18.045702   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:18.048852   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049066   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049259   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.049285   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049423   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:18.049578   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.049610   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049611   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:18.049688   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:18.049885   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:18.049908   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:18.050090   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:18.050082   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:18.050313   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:18.128381   60215 ssh_runner.go:195] Run: systemctl --version
	I1216 21:00:18.165162   60215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 21:00:18.313679   60215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 21:00:18.321330   60215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 21:00:18.321407   60215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 21:00:18.340577   60215 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 21:00:18.340601   60215 start.go:495] detecting cgroup driver to use...
	I1216 21:00:18.340672   60215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 21:00:18.357273   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 21:00:18.373169   60215 docker.go:217] disabling cri-docker service (if available) ...
	I1216 21:00:18.373231   60215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 21:00:18.387904   60215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 21:00:18.402499   60215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 21:00:18.528830   60215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 21:00:18.677746   60215 docker.go:233] disabling docker service ...
	I1216 21:00:18.677839   60215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 21:00:18.693059   60215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 21:00:18.707368   60215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 21:00:18.870936   60215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 21:00:19.011321   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 21:00:19.025645   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 21:00:19.045618   60215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 21:00:19.045695   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.056739   60215 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 21:00:19.056813   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.067975   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.078954   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.090165   60215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 21:00:19.101906   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.112949   60215 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.131186   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.142238   60215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 21:00:19.152768   60215 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 21:00:19.152830   60215 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 21:00:19.169166   60215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 21:00:19.188991   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:19.319083   60215 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 21:00:19.427266   60215 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 21:00:19.427377   60215 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 21:00:19.432716   60215 start.go:563] Will wait 60s for crictl version
	I1216 21:00:19.432793   60215 ssh_runner.go:195] Run: which crictl
	I1216 21:00:19.437514   60215 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 21:00:19.484613   60215 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 21:00:19.484726   60215 ssh_runner.go:195] Run: crio --version
	I1216 21:00:19.519451   60215 ssh_runner.go:195] Run: crio --version
	I1216 21:00:19.555298   60215 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 21:00:19.556696   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:19.559802   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:19.560178   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:19.560201   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:19.560467   60215 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1216 21:00:19.565180   60215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:00:19.579863   60215 kubeadm.go:883] updating cluster {Name:embed-certs-606219 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 21:00:19.579991   60215 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 21:00:19.580037   60215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:19.618480   60215 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 21:00:19.618556   60215 ssh_runner.go:195] Run: which lz4
	I1216 21:00:19.622839   60215 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 21:00:19.627438   60215 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 21:00:19.627482   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1216 21:00:16.819610   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:19.326427   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:17.942107   60421 pod_ready.go:93] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:17.942148   60421 pod_ready.go:82] duration metric: took 10.506728599s for pod "kube-proxy-5mw2b" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.942161   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.948518   60421 pod_ready.go:93] pod "kube-scheduler-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:17.948540   60421 pod_ready.go:82] duration metric: took 6.372903ms for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.948549   60421 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:19.956992   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:21.957271   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:19.815807   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:20.316465   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:20.816461   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.316731   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.816637   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:22.315727   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:22.816447   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:23.316510   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:23.816408   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:24.316454   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.237863   60215 crio.go:462] duration metric: took 1.615059209s to copy over tarball
	I1216 21:00:21.237956   60215 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 21:00:23.572502   60215 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.33450798s)
	I1216 21:00:23.572535   60215 crio.go:469] duration metric: took 2.334633133s to extract the tarball
	I1216 21:00:23.572549   60215 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 21:00:23.613530   60215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:23.667777   60215 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 21:00:23.667807   60215 cache_images.go:84] Images are preloaded, skipping loading
	I1216 21:00:23.667815   60215 kubeadm.go:934] updating node { 192.168.61.151 8443 v1.32.0 crio true true} ...
	I1216 21:00:23.667929   60215 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-606219 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 21:00:23.668009   60215 ssh_runner.go:195] Run: crio config
	I1216 21:00:23.716162   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:00:23.716184   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:23.716192   60215 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 21:00:23.716211   60215 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.151 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-606219 NodeName:embed-certs-606219 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 21:00:23.716337   60215 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-606219"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.151"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.151"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 21:00:23.716393   60215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 21:00:23.727236   60215 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 21:00:23.727337   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 21:00:23.737632   60215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1216 21:00:23.757380   60215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 21:00:23.774863   60215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1216 21:00:23.795070   60215 ssh_runner.go:195] Run: grep 192.168.61.151	control-plane.minikube.internal$ /etc/hosts
	I1216 21:00:23.799453   60215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:00:23.814278   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:23.962200   60215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:00:23.981947   60215 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219 for IP: 192.168.61.151
	I1216 21:00:23.981976   60215 certs.go:194] generating shared ca certs ...
	I1216 21:00:23.981999   60215 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:23.982156   60215 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 21:00:23.982197   60215 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 21:00:23.982204   60215 certs.go:256] generating profile certs ...
	I1216 21:00:23.982280   60215 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/client.key
	I1216 21:00:23.982336   60215 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.key.b346be49
	I1216 21:00:23.982376   60215 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.key
	I1216 21:00:23.982483   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 21:00:23.982513   60215 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 21:00:23.982523   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 21:00:23.982555   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 21:00:23.982582   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 21:00:23.982602   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 21:00:23.982655   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:23.983524   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 21:00:24.015369   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 21:00:24.043889   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 21:00:24.087807   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 21:00:24.137438   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1216 21:00:24.174859   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 21:00:24.200220   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 21:00:24.225811   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 21:00:24.251567   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 21:00:24.276737   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 21:00:24.302541   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 21:00:24.329876   60215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 21:00:24.350133   60215 ssh_runner.go:195] Run: openssl version
	I1216 21:00:24.356984   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 21:00:24.371219   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.376759   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.376816   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.383725   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 21:00:24.397759   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 21:00:24.409836   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.414765   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.414836   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.421662   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 21:00:24.433843   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 21:00:24.447839   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.453107   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.453185   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.459472   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 21:00:24.471714   60215 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 21:00:24.476881   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 21:00:24.486263   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 21:00:24.493146   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 21:00:24.500093   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 21:00:24.506599   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 21:00:24.512946   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 21:00:24.519699   60215 kubeadm.go:392] StartCluster: {Name:embed-certs-606219 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:00:24.519780   60215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 21:00:24.519861   60215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:24.570867   60215 cri.go:89] found id: ""
	I1216 21:00:24.570952   60215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 21:00:24.583857   60215 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 21:00:24.583887   60215 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 21:00:24.583943   60215 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 21:00:24.595709   60215 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 21:00:24.596734   60215 kubeconfig.go:125] found "embed-certs-606219" server: "https://192.168.61.151:8443"
	I1216 21:00:24.598569   60215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 21:00:24.609876   60215 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.151
	I1216 21:00:24.609905   60215 kubeadm.go:1160] stopping kube-system containers ...
	I1216 21:00:24.609917   60215 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 21:00:24.609964   60215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:24.654487   60215 cri.go:89] found id: ""
	I1216 21:00:24.654567   60215 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 21:00:24.676658   60215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:00:24.689546   60215 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:00:24.689571   60215 kubeadm.go:157] found existing configuration files:
	
	I1216 21:00:24.689615   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:00:21.819876   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:23.820061   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:23.957368   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:26.556301   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:24.816467   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:25.315789   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:25.816410   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.316537   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.816144   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.316659   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.816126   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:28.316568   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:28.816151   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:29.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:24.700928   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:00:24.701012   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:00:24.713438   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:00:24.725184   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:00:24.725257   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:00:24.737483   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:00:24.749488   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:00:24.749546   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:00:24.762322   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:00:24.774309   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:00:24.774391   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:00:24.787008   60215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:00:24.798394   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:25.009799   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:25.917432   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:26.175602   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:26.279646   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:26.362472   60215 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:00:26.362564   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.862646   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.362663   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.421335   60215 api_server.go:72] duration metric: took 1.058863872s to wait for apiserver process to appear ...
	I1216 21:00:27.421361   60215 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:00:27.421380   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:27.421869   60215 api_server.go:269] stopped: https://192.168.61.151:8443/healthz: Get "https://192.168.61.151:8443/healthz": dial tcp 192.168.61.151:8443: connect: connection refused
	I1216 21:00:27.921493   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:26.471175   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:28.819200   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:30.365380   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.365410   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.365425   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.416044   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.416078   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.422219   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.432135   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.432161   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.921790   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.929160   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:30.929192   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:31.421708   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:31.432805   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:31.432839   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:31.922000   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:31.933658   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 200:
	ok
	I1216 21:00:31.945496   60215 api_server.go:141] control plane version: v1.32.0
	I1216 21:00:31.945534   60215 api_server.go:131] duration metric: took 4.524165612s to wait for apiserver health ...
	I1216 21:00:31.945546   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:00:31.945555   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:31.947456   60215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:00:28.954572   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:30.955397   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:29.816510   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:30.315756   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:30.815774   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.316516   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.816503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:32.316499   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:32.816455   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:33.316478   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:33.816363   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:34.316057   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.948727   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:00:31.977877   60215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:00:32.014745   60215 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:00:32.027268   60215 system_pods.go:59] 8 kube-system pods found
	I1216 21:00:32.027303   60215 system_pods.go:61] "coredns-668d6bf9bc-rp29f" [0135dcef-2324-49ec-b459-f34b73efd82b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:00:32.027311   60215 system_pods.go:61] "etcd-embed-certs-606219" [05f01ef3-5d92-4d16-9643-0f56df3869f6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 21:00:32.027320   60215 system_pods.go:61] "kube-apiserver-embed-certs-606219" [4294c469-e47a-4722-a620-92c33d23b41e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 21:00:32.027326   60215 system_pods.go:61] "kube-controller-manager-embed-certs-606219" [cc8452e6-ca00-44dd-8d77-897df20d37f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 21:00:32.027354   60215 system_pods.go:61] "kube-proxy-8t495" [492be5cc-7d3a-4983-9bc7-14091bef7b43] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 21:00:32.027362   60215 system_pods.go:61] "kube-scheduler-embed-certs-606219" [63c42d73-a17a-4b37-a585-f7db5923c493] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 21:00:32.027376   60215 system_pods.go:61] "metrics-server-f79f97bbb-d6gmd" [50916d48-ee33-4e96-9507-c486d8ac7f7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:00:32.027387   60215 system_pods.go:61] "storage-provisioner" [1164651f-c3b5-445f-882a-60eb2f2fb3f8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 21:00:32.027399   60215 system_pods.go:74] duration metric: took 12.633182ms to wait for pod list to return data ...
	I1216 21:00:32.027409   60215 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:00:32.041648   60215 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:00:32.041677   60215 node_conditions.go:123] node cpu capacity is 2
	I1216 21:00:32.041686   60215 node_conditions.go:105] duration metric: took 14.27317ms to run NodePressure ...
	I1216 21:00:32.041704   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:32.492472   60215 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 21:00:32.504237   60215 kubeadm.go:739] kubelet initialised
	I1216 21:00:32.504271   60215 kubeadm.go:740] duration metric: took 11.772175ms waiting for restarted kubelet to initialise ...
	I1216 21:00:32.504282   60215 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:00:32.525531   60215 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:34.531954   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:31.321998   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:33.325288   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:32.959143   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:35.454928   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:37.455474   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:34.815839   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:35.316503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:35.816590   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.316231   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.816011   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:37.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:37.816494   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:38.316486   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:38.816475   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:39.315762   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.534516   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.032255   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:35.819575   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:38.322139   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:40.322804   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.456089   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:41.955128   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.816009   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:40.316444   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:40.816493   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.315869   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.816495   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:42.316034   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:42.816422   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:43.316432   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:43.815875   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:44.316036   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.032545   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:43.534471   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:42.819610   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:44.820561   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:43.955190   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:46.455540   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:44.816293   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.316458   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.815992   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:46.316054   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:46.816449   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:47.316113   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:47.816514   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:48.316353   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:48.816144   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:49.316435   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.031682   60215 pod_ready.go:93] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.031705   60215 pod_ready.go:82] duration metric: took 12.506146086s for pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.031715   60215 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.038109   60215 pod_ready.go:93] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.038138   60215 pod_ready.go:82] duration metric: took 6.416609ms for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.038149   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.043764   60215 pod_ready.go:93] pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.043784   60215 pod_ready.go:82] duration metric: took 5.621982ms for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.043793   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.053376   60215 pod_ready.go:93] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.053399   60215 pod_ready.go:82] duration metric: took 9.600142ms for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.053409   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8t495" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.058956   60215 pod_ready.go:93] pod "kube-proxy-8t495" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.058976   60215 pod_ready.go:82] duration metric: took 5.561188ms for pod "kube-proxy-8t495" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.058984   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.429908   60215 pod_ready.go:93] pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.429932   60215 pod_ready.go:82] duration metric: took 370.942192ms for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.429942   60215 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:47.438759   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:47.323605   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:49.819763   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:48.456270   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:50.955190   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:49.815935   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:50.316437   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:50.816335   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:51.315747   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:51.816504   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:52.315695   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:52.816115   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:53.316498   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:53.816529   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:54.315689   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:49.935961   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:51.937245   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:53.937302   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:51.820266   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:53.820748   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:52.956645   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:55.456064   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:54.816019   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:55.316484   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:55.816517   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.315858   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.816306   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:57.316447   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:57.815879   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:58.316493   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:58.816395   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:59.316225   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.437390   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:58.938617   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:56.323619   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:58.820330   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:57.956401   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:00.456844   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:02.457677   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:59.816440   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:00.315769   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:00.816285   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.316020   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.818175   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:02.315780   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:02.816411   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:03.315758   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:03.815810   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:04.316731   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.436856   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:03.436945   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:00.820484   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:03.323328   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:04.955714   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.455361   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:04.816470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:05.316528   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:05.815792   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:06.316491   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:06.815977   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:07.316002   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:07.816043   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:07.816114   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:07.861866   60933 cri.go:89] found id: ""
	I1216 21:01:07.861896   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.861906   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:07.861913   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:07.861978   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:07.905674   60933 cri.go:89] found id: ""
	I1216 21:01:07.905700   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.905707   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:07.905713   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:07.905798   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:07.949936   60933 cri.go:89] found id: ""
	I1216 21:01:07.949966   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.949977   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:07.949984   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:07.950048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:07.987196   60933 cri.go:89] found id: ""
	I1216 21:01:07.987223   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.987232   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:07.987237   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:07.987341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:08.033126   60933 cri.go:89] found id: ""
	I1216 21:01:08.033156   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.033168   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:08.033176   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:08.033252   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:08.072223   60933 cri.go:89] found id: ""
	I1216 21:01:08.072257   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.072270   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:08.072278   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:08.072345   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:08.117257   60933 cri.go:89] found id: ""
	I1216 21:01:08.117288   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.117299   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:08.117319   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:08.117389   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:08.158059   60933 cri.go:89] found id: ""
	I1216 21:01:08.158096   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.158106   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:08.158119   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:08.158133   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:08.232930   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:08.232966   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:08.277173   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:08.277204   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:08.331763   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:08.331802   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:08.346150   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:08.346178   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:08.488668   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:05.437627   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.938294   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:05.820491   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.821058   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:10.322630   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:09.456101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:11.461923   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:10.989383   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:11.003162   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:11.003266   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:11.040432   60933 cri.go:89] found id: ""
	I1216 21:01:11.040464   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.040475   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:11.040483   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:11.040547   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:11.083083   60933 cri.go:89] found id: ""
	I1216 21:01:11.083110   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.083117   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:11.083122   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:11.083182   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:11.122842   60933 cri.go:89] found id: ""
	I1216 21:01:11.122880   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.122893   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:11.122900   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:11.122969   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:11.168227   60933 cri.go:89] found id: ""
	I1216 21:01:11.168268   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.168279   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:11.168286   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:11.168359   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:11.218660   60933 cri.go:89] found id: ""
	I1216 21:01:11.218689   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.218701   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:11.218708   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:11.218774   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:11.281179   60933 cri.go:89] found id: ""
	I1216 21:01:11.281214   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.281227   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:11.281236   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:11.281315   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:11.326419   60933 cri.go:89] found id: ""
	I1216 21:01:11.326453   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.326464   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:11.326472   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:11.326535   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:11.368825   60933 cri.go:89] found id: ""
	I1216 21:01:11.368863   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.368875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:11.368887   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:11.368905   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:11.454848   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:11.454869   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:11.454888   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:11.541685   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:11.541724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:11.581804   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:11.581830   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:11.635800   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:11.635838   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:14.152441   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:14.167637   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:14.167720   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:14.206685   60933 cri.go:89] found id: ""
	I1216 21:01:14.206716   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.206728   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:14.206735   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:14.206796   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:14.248126   60933 cri.go:89] found id: ""
	I1216 21:01:14.248151   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.248159   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:14.248165   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:14.248215   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:14.285030   60933 cri.go:89] found id: ""
	I1216 21:01:14.285067   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.285079   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:14.285086   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:14.285151   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:14.325706   60933 cri.go:89] found id: ""
	I1216 21:01:14.325736   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.325747   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:14.325755   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:14.325820   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:14.369447   60933 cri.go:89] found id: ""
	I1216 21:01:14.369475   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.369486   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:14.369494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:14.369557   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:10.437872   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:12.937013   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:12.820480   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:15.319910   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:13.959919   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:16.458101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:14.407792   60933 cri.go:89] found id: ""
	I1216 21:01:14.407818   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.407826   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:14.407832   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:14.407890   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:14.448380   60933 cri.go:89] found id: ""
	I1216 21:01:14.448411   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.448419   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:14.448424   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:14.448473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:14.487116   60933 cri.go:89] found id: ""
	I1216 21:01:14.487144   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.487154   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:14.487164   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:14.487177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:14.547342   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:14.547390   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:14.563385   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:14.563424   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:14.637363   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:14.637394   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:14.637410   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:14.715586   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:14.715626   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:17.258974   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:17.273896   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:17.273970   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:17.317359   60933 cri.go:89] found id: ""
	I1216 21:01:17.317394   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.317405   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:17.317412   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:17.317476   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:17.361422   60933 cri.go:89] found id: ""
	I1216 21:01:17.361451   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.361462   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:17.361469   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:17.361568   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:17.401466   60933 cri.go:89] found id: ""
	I1216 21:01:17.401522   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.401534   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:17.401544   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:17.401614   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:17.439560   60933 cri.go:89] found id: ""
	I1216 21:01:17.439588   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.439597   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:17.439603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:17.439655   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:17.480310   60933 cri.go:89] found id: ""
	I1216 21:01:17.480333   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.480340   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:17.480345   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:17.480393   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:17.528562   60933 cri.go:89] found id: ""
	I1216 21:01:17.528589   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.528600   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:17.528607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:17.528671   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:17.569863   60933 cri.go:89] found id: ""
	I1216 21:01:17.569900   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.569908   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:17.569914   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:17.569975   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:17.610840   60933 cri.go:89] found id: ""
	I1216 21:01:17.610867   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.610875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:17.610884   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:17.610895   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:17.661002   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:17.661041   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:17.675290   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:17.675318   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:17.743550   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:17.743572   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:17.743584   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:17.824479   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:17.824524   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:15.437260   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:17.937487   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:17.324337   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:19.819325   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:18.956605   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:20.957030   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:20.373687   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:20.389149   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:20.389244   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:20.429594   60933 cri.go:89] found id: ""
	I1216 21:01:20.429626   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.429634   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:20.429639   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:20.429693   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:20.473157   60933 cri.go:89] found id: ""
	I1216 21:01:20.473185   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.473193   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:20.473198   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:20.473264   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:20.512549   60933 cri.go:89] found id: ""
	I1216 21:01:20.512586   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.512597   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:20.512604   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:20.512676   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:20.549275   60933 cri.go:89] found id: ""
	I1216 21:01:20.549310   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.549323   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:20.549344   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:20.549408   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:20.587405   60933 cri.go:89] found id: ""
	I1216 21:01:20.587435   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.587443   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:20.587449   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:20.587515   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:20.625364   60933 cri.go:89] found id: ""
	I1216 21:01:20.625393   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.625400   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:20.625406   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:20.625456   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:20.664018   60933 cri.go:89] found id: ""
	I1216 21:01:20.664050   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.664061   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:20.664068   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:20.664117   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:20.703860   60933 cri.go:89] found id: ""
	I1216 21:01:20.703890   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.703898   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:20.703906   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:20.703918   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:20.754433   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:20.754470   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:20.770136   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:20.770172   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:20.854025   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:20.854049   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:20.854061   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:20.939628   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:20.939661   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:23.489645   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:23.503603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:23.503667   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:23.543044   60933 cri.go:89] found id: ""
	I1216 21:01:23.543070   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.543077   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:23.543083   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:23.543131   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:23.580333   60933 cri.go:89] found id: ""
	I1216 21:01:23.580362   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.580371   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:23.580377   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:23.580428   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:23.616732   60933 cri.go:89] found id: ""
	I1216 21:01:23.616766   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.616778   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:23.616785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:23.616834   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:23.655771   60933 cri.go:89] found id: ""
	I1216 21:01:23.655793   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.655801   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:23.655807   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:23.655861   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:23.694400   60933 cri.go:89] found id: ""
	I1216 21:01:23.694430   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.694437   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:23.694443   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:23.694500   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:23.732592   60933 cri.go:89] found id: ""
	I1216 21:01:23.732622   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.732630   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:23.732636   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:23.732688   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:23.769752   60933 cri.go:89] found id: ""
	I1216 21:01:23.769787   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.769801   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:23.769810   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:23.769892   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:23.806891   60933 cri.go:89] found id: ""
	I1216 21:01:23.806925   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.806936   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:23.806947   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:23.806963   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:23.822887   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:23.822912   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:23.898795   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:23.898817   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:23.898830   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:23.978036   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:23.978073   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:24.032500   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:24.032528   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:20.437888   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:22.936895   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:21.819859   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:23.820383   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:23.456331   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:25.960513   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:26.585937   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:26.599470   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:26.599543   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:26.635421   60933 cri.go:89] found id: ""
	I1216 21:01:26.635446   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.635455   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:26.635461   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:26.635527   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:26.675347   60933 cri.go:89] found id: ""
	I1216 21:01:26.675379   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.675390   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:26.675397   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:26.675464   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:26.715444   60933 cri.go:89] found id: ""
	I1216 21:01:26.715469   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.715480   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:26.715541   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:26.715619   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:26.753841   60933 cri.go:89] found id: ""
	I1216 21:01:26.753874   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.753893   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:26.753901   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:26.753963   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:26.791427   60933 cri.go:89] found id: ""
	I1216 21:01:26.791453   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.791464   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:26.791473   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:26.791539   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:26.832772   60933 cri.go:89] found id: ""
	I1216 21:01:26.832804   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.832816   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:26.832823   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:26.832887   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:26.869963   60933 cri.go:89] found id: ""
	I1216 21:01:26.869990   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.869997   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:26.870003   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:26.870068   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:26.906792   60933 cri.go:89] found id: ""
	I1216 21:01:26.906821   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.906862   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:26.906875   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:26.906894   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:26.994820   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:26.994863   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:27.034642   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:27.034686   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:27.089128   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:27.089168   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:27.104368   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:27.104401   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:27.179852   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:25.436696   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:27.937229   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:26.319568   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:28.820132   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:28.454880   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:30.455734   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:29.681052   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:29.695376   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:29.695464   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:29.735562   60933 cri.go:89] found id: ""
	I1216 21:01:29.735588   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.735596   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:29.735602   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:29.735650   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:29.772635   60933 cri.go:89] found id: ""
	I1216 21:01:29.772663   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.772672   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:29.772678   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:29.772737   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:29.810471   60933 cri.go:89] found id: ""
	I1216 21:01:29.810499   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.810509   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:29.810516   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:29.810575   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:29.845917   60933 cri.go:89] found id: ""
	I1216 21:01:29.845952   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.845966   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:29.845975   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:29.846048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:29.883866   60933 cri.go:89] found id: ""
	I1216 21:01:29.883892   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.883900   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:29.883906   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:29.883968   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:29.920696   60933 cri.go:89] found id: ""
	I1216 21:01:29.920729   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.920740   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:29.920748   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:29.920831   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:29.957977   60933 cri.go:89] found id: ""
	I1216 21:01:29.958056   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.958069   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:29.958079   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:29.958144   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:29.995436   60933 cri.go:89] found id: ""
	I1216 21:01:29.995464   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.995472   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:29.995481   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:29.995492   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:30.046819   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:30.046859   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:30.062754   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:30.062807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:30.138932   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:30.138959   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:30.138975   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:30.225720   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:30.225768   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
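
Each probe cycle above is minikube asking the CRI runtime whether any control-plane container exists yet; every crictl ps call returns an empty ID list, so kube-apiserver, etcd, CoreDNS, the scheduler, kube-proxy, the controller-manager, kindnet and the dashboard are all still absent. The probes can be replayed by hand on the node; the loop below is an illustrative sketch assembled from the commands already quoted in the log, not part of the test output:

	# sketch only: replay the per-component probes minikube issues over SSH
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"   # empty output means no matching container
	done
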
	I1216 21:01:32.768185   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:32.782642   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:32.782729   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:32.821995   60933 cri.go:89] found id: ""
	I1216 21:01:32.822029   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.822040   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:32.822048   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:32.822112   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:32.858453   60933 cri.go:89] found id: ""
	I1216 21:01:32.858487   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.858497   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:32.858504   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:32.858570   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:32.896269   60933 cri.go:89] found id: ""
	I1216 21:01:32.896304   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.896316   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:32.896323   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:32.896384   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:32.936795   60933 cri.go:89] found id: ""
	I1216 21:01:32.936820   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.936832   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:32.936838   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:32.936904   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:32.974779   60933 cri.go:89] found id: ""
	I1216 21:01:32.974810   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.974821   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:32.974828   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:32.974892   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:33.012201   60933 cri.go:89] found id: ""
	I1216 21:01:33.012226   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.012234   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:33.012239   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:33.012287   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:33.049777   60933 cri.go:89] found id: ""
	I1216 21:01:33.049803   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.049811   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:33.049816   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:33.049873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:33.087820   60933 cri.go:89] found id: ""
	I1216 21:01:33.087851   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.087859   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:33.087870   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:33.087885   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:33.140816   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:33.140854   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:33.154817   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:33.154855   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:33.231445   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:33.231474   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:33.231496   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:33.311547   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:33.311586   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:29.938045   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:32.436934   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:34.444209   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:31.321180   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:33.324091   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:32.956028   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.454994   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:37.455094   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.855686   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:35.870404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:35.870485   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:35.908175   60933 cri.go:89] found id: ""
	I1216 21:01:35.908204   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.908215   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:35.908222   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:35.908284   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:35.955456   60933 cri.go:89] found id: ""
	I1216 21:01:35.955483   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.955494   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:35.955501   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:35.955562   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:35.995170   60933 cri.go:89] found id: ""
	I1216 21:01:35.995201   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.995211   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:35.995218   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:35.995305   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:36.033729   60933 cri.go:89] found id: ""
	I1216 21:01:36.033758   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.033769   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:36.033776   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:36.033840   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:36.072756   60933 cri.go:89] found id: ""
	I1216 21:01:36.072787   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.072799   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:36.072806   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:36.072873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:36.112149   60933 cri.go:89] found id: ""
	I1216 21:01:36.112187   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.112198   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:36.112205   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:36.112258   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:36.148742   60933 cri.go:89] found id: ""
	I1216 21:01:36.148770   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.148781   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:36.148789   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:36.148855   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:36.192827   60933 cri.go:89] found id: ""
	I1216 21:01:36.192864   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.192875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:36.192886   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:36.192901   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:36.243822   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:36.243867   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:36.258258   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:36.258292   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:36.342847   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:36.342876   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:36.342891   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:36.424741   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:36.424780   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:38.967334   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:38.982208   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:38.982283   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:39.023903   60933 cri.go:89] found id: ""
	I1216 21:01:39.023931   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.023939   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:39.023945   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:39.023997   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:39.070314   60933 cri.go:89] found id: ""
	I1216 21:01:39.070342   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.070351   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:39.070359   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:39.070423   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:39.115081   60933 cri.go:89] found id: ""
	I1216 21:01:39.115106   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.115113   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:39.115119   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:39.115178   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:39.151933   60933 cri.go:89] found id: ""
	I1216 21:01:39.151959   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.151967   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:39.151972   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:39.152022   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:39.192280   60933 cri.go:89] found id: ""
	I1216 21:01:39.192307   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.192315   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:39.192322   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:39.192370   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:39.228792   60933 cri.go:89] found id: ""
	I1216 21:01:39.228814   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.228822   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:39.228827   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:39.228887   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:39.266823   60933 cri.go:89] found id: ""
	I1216 21:01:39.266847   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.266854   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:39.266860   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:39.266908   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:39.301317   60933 cri.go:89] found id: ""
	I1216 21:01:39.301340   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.301348   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:39.301361   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:39.301372   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:39.386615   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:39.386663   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:36.936376   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:38.936968   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.820025   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:37.820396   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:40.319915   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:39.457790   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:41.955758   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:39.433079   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:39.433112   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:39.489422   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:39.489458   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:39.504223   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:39.504259   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:39.587898   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
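
The repeated "failed describe nodes" warning has the same root cause: the bundled v1.20.0 kubectl is pointed at the API server on localhost:8443, but since no kube-apiserver container ever came up the connection is refused outright. A quick manual confirmation on the node could look like the sketch below (illustrative only, not taken from the test run):

	# sketch only: confirm nothing is listening on 8443, then retry the call the log keeps making
	sudo ss -ltn | grep -w 8443 || echo "no listener on 8443"
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	     --kubeconfig=/var/lib/minikube/kubeconfig   # fails with "connection refused" until the apiserver starts
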
	I1216 21:01:42.088900   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:42.103768   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:42.103854   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:42.141956   60933 cri.go:89] found id: ""
	I1216 21:01:42.142026   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.142040   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:42.142049   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:42.142117   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:42.178754   60933 cri.go:89] found id: ""
	I1216 21:01:42.178782   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.178818   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:42.178833   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:42.178891   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:42.215872   60933 cri.go:89] found id: ""
	I1216 21:01:42.215905   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.215916   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:42.215923   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:42.215991   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:42.253854   60933 cri.go:89] found id: ""
	I1216 21:01:42.253885   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.253896   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:42.253904   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:42.253972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:42.290963   60933 cri.go:89] found id: ""
	I1216 21:01:42.291008   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.291023   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:42.291039   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:42.291109   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:42.332920   60933 cri.go:89] found id: ""
	I1216 21:01:42.332946   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.332953   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:42.332959   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:42.333006   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:42.375060   60933 cri.go:89] found id: ""
	I1216 21:01:42.375093   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.375104   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:42.375112   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:42.375189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:42.416593   60933 cri.go:89] found id: ""
	I1216 21:01:42.416621   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.416631   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:42.416639   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:42.416651   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:42.475204   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:42.475260   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:42.491022   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:42.491057   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:42.566645   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:42.566672   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:42.566687   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:42.646815   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:42.646856   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:41.436872   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:43.936734   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:42.321709   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:44.321985   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:43.955807   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:46.455508   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:45.191912   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:45.205487   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:45.205548   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:45.245350   60933 cri.go:89] found id: ""
	I1216 21:01:45.245389   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.245397   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:45.245404   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:45.245482   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:45.302126   60933 cri.go:89] found id: ""
	I1216 21:01:45.302158   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.302171   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:45.302178   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:45.302251   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:45.342888   60933 cri.go:89] found id: ""
	I1216 21:01:45.342917   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.342932   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:45.342937   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:45.342990   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:45.381545   60933 cri.go:89] found id: ""
	I1216 21:01:45.381574   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.381585   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:45.381593   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:45.381652   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:45.418081   60933 cri.go:89] found id: ""
	I1216 21:01:45.418118   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.418131   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:45.418138   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:45.418207   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:45.458610   60933 cri.go:89] found id: ""
	I1216 21:01:45.458637   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.458647   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:45.458655   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:45.458713   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:45.500102   60933 cri.go:89] found id: ""
	I1216 21:01:45.500137   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.500148   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:45.500155   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:45.500217   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:45.542074   60933 cri.go:89] found id: ""
	I1216 21:01:45.542103   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.542113   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:45.542122   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:45.542134   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:45.597577   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:45.597614   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:45.614028   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:45.614075   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:45.693014   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:45.693039   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:45.693056   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:45.772260   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:45.772295   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:48.317073   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:48.332176   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:48.332242   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:48.369946   60933 cri.go:89] found id: ""
	I1216 21:01:48.369976   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.369988   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:48.369994   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:48.370059   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:48.407628   60933 cri.go:89] found id: ""
	I1216 21:01:48.407661   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.407672   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:48.407680   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:48.407742   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:48.444377   60933 cri.go:89] found id: ""
	I1216 21:01:48.444403   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.444411   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:48.444416   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:48.444467   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:48.485674   60933 cri.go:89] found id: ""
	I1216 21:01:48.485710   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.485722   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:48.485730   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:48.485785   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:48.530577   60933 cri.go:89] found id: ""
	I1216 21:01:48.530610   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.530621   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:48.530628   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:48.530693   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:48.567128   60933 cri.go:89] found id: ""
	I1216 21:01:48.567151   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.567159   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:48.567165   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:48.567216   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:48.603294   60933 cri.go:89] found id: ""
	I1216 21:01:48.603320   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.603327   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:48.603333   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:48.603392   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:48.646221   60933 cri.go:89] found id: ""
	I1216 21:01:48.646253   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.646265   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:48.646288   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:48.646318   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:48.697589   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:48.697624   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:48.711916   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:48.711947   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:48.789068   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:48.789097   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:48.789113   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:48.872340   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:48.872378   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:45.937806   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.437160   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:46.819986   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.821079   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.456975   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:50.956101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
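
The interleaved pod_ready lines come from the other serial StartStop runs (processes 60215, 60829 and 60421), each polling its metrics-server pod in kube-system and never seeing a Ready condition of "True". The equivalent manual check would be something like the sketch below; the pod name is taken from the log, but the command itself is an illustration (cluster context omitted), not part of the test run:

	# sketch only: read the Ready condition the harness keeps polling
	kubectl -n kube-system get pod metrics-server-f79f97bbb-5xf67 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints False while the pod is unready
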
	I1216 21:01:51.418176   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:51.434851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:51.434948   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:51.478935   60933 cri.go:89] found id: ""
	I1216 21:01:51.478963   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.478975   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:51.478982   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:51.479043   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:51.524581   60933 cri.go:89] found id: ""
	I1216 21:01:51.524611   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.524622   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:51.524629   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:51.524686   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:51.563479   60933 cri.go:89] found id: ""
	I1216 21:01:51.563507   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.563516   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:51.563521   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:51.563578   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:51.601931   60933 cri.go:89] found id: ""
	I1216 21:01:51.601964   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.601975   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:51.601982   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:51.602044   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:51.638984   60933 cri.go:89] found id: ""
	I1216 21:01:51.639014   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.639025   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:51.639032   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:51.639093   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:51.681137   60933 cri.go:89] found id: ""
	I1216 21:01:51.681167   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.681178   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:51.681185   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:51.681263   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:51.722904   60933 cri.go:89] found id: ""
	I1216 21:01:51.722932   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.722941   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:51.722946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:51.722994   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:51.794403   60933 cri.go:89] found id: ""
	I1216 21:01:51.794434   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.794444   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:51.794453   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:51.794464   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:51.850688   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:51.850724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:51.866049   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:51.866079   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:51.949844   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:51.949880   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:51.949894   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:52.028981   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:52.029023   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:50.936202   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:52.936839   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:51.321959   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:53.819864   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:53.455360   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:55.954957   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:54.570192   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:54.585405   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:54.585489   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:54.627670   60933 cri.go:89] found id: ""
	I1216 21:01:54.627701   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.627712   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:54.627719   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:54.627782   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:54.671226   60933 cri.go:89] found id: ""
	I1216 21:01:54.671265   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.671276   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:54.671283   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:54.671337   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:54.705549   60933 cri.go:89] found id: ""
	I1216 21:01:54.705581   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.705592   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:54.705600   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:54.705663   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:54.743638   60933 cri.go:89] found id: ""
	I1216 21:01:54.743664   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.743671   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:54.743677   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:54.743728   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:54.781714   60933 cri.go:89] found id: ""
	I1216 21:01:54.781750   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.781760   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:54.781767   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:54.781831   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:54.830808   60933 cri.go:89] found id: ""
	I1216 21:01:54.830840   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.830851   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:54.830859   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:54.830923   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:54.868539   60933 cri.go:89] found id: ""
	I1216 21:01:54.868565   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.868573   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:54.868578   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:54.868626   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:54.906554   60933 cri.go:89] found id: ""
	I1216 21:01:54.906587   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.906595   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:54.906604   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:54.906617   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:54.960664   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:54.960696   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:54.975657   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:54.975686   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:55.052266   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:55.052293   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:55.052320   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:55.137894   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:55.137937   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:57.682769   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:57.699102   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:57.699184   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:57.764651   60933 cri.go:89] found id: ""
	I1216 21:01:57.764684   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.764692   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:57.764698   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:57.764755   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:57.805358   60933 cri.go:89] found id: ""
	I1216 21:01:57.805385   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.805395   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:57.805404   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:57.805474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:57.843589   60933 cri.go:89] found id: ""
	I1216 21:01:57.843623   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.843634   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:57.843644   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:57.843716   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:57.881725   60933 cri.go:89] found id: ""
	I1216 21:01:57.881748   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.881756   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:57.881761   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:57.881811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:57.922252   60933 cri.go:89] found id: ""
	I1216 21:01:57.922293   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.922305   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:57.922322   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:57.922385   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:57.962532   60933 cri.go:89] found id: ""
	I1216 21:01:57.962555   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.962562   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:57.962567   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:57.962615   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:58.002021   60933 cri.go:89] found id: ""
	I1216 21:01:58.002056   60933 logs.go:282] 0 containers: []
	W1216 21:01:58.002067   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:58.002074   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:58.002137   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:58.035648   60933 cri.go:89] found id: ""
	I1216 21:01:58.035672   60933 logs.go:282] 0 containers: []
	W1216 21:01:58.035680   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:58.035688   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:58.035699   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:58.116142   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:58.116177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:58.157683   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:58.157717   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:58.211686   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:58.211722   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:58.226385   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:58.226409   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:58.302287   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:54.937208   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:57.437396   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:59.438489   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:56.326836   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:58.818671   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:57.955980   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.455212   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.802544   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:00.816325   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:00.816405   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:00.853031   60933 cri.go:89] found id: ""
	I1216 21:02:00.853057   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.853065   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:00.853070   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:00.853122   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:00.891040   60933 cri.go:89] found id: ""
	I1216 21:02:00.891071   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.891082   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:00.891089   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:00.891151   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:00.929145   60933 cri.go:89] found id: ""
	I1216 21:02:00.929168   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.929175   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:00.929181   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:00.929227   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:00.976469   60933 cri.go:89] found id: ""
	I1216 21:02:00.976492   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.976500   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:00.976505   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:00.976553   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:01.015053   60933 cri.go:89] found id: ""
	I1216 21:02:01.015078   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.015086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:01.015092   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:01.015150   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:01.052859   60933 cri.go:89] found id: ""
	I1216 21:02:01.052891   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.052902   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:01.052909   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:01.053028   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:01.091209   60933 cri.go:89] found id: ""
	I1216 21:02:01.091238   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.091259   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:01.091266   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:01.091341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:01.127013   60933 cri.go:89] found id: ""
	I1216 21:02:01.127038   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.127047   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:01.127058   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:01.127072   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:01.179642   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:01.179697   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:01.196390   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:01.196416   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:01.275446   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:01.275478   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:01.275493   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:01.354391   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:01.354429   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:03.897672   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:03.911596   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:03.911654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:03.955700   60933 cri.go:89] found id: ""
	I1216 21:02:03.955726   60933 logs.go:282] 0 containers: []
	W1216 21:02:03.955735   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:03.955741   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:03.955803   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:03.995661   60933 cri.go:89] found id: ""
	I1216 21:02:03.995696   60933 logs.go:282] 0 containers: []
	W1216 21:02:03.995706   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:03.995713   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:03.995772   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:04.031368   60933 cri.go:89] found id: ""
	I1216 21:02:04.031391   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.031398   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:04.031406   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:04.031455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:04.067633   60933 cri.go:89] found id: ""
	I1216 21:02:04.067659   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.067666   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:04.067671   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:04.067719   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:04.105734   60933 cri.go:89] found id: ""
	I1216 21:02:04.105758   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.105768   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:04.105773   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:04.105824   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:04.146542   60933 cri.go:89] found id: ""
	I1216 21:02:04.146564   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.146571   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:04.146577   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:04.146623   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:04.184433   60933 cri.go:89] found id: ""
	I1216 21:02:04.184462   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.184473   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:04.184480   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:04.184551   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:04.223077   60933 cri.go:89] found id: ""
	I1216 21:02:04.223106   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.223117   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:04.223127   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:04.223140   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:04.279618   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:04.279656   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:04.295841   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:04.295865   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:04.372609   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:04.372632   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:04.372648   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:01.937175   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:03.937249   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.819801   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:02.820563   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:05.320087   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:02.955461   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:05.455023   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:07.456981   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:04.457597   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:04.457631   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:07.006004   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:07.020394   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:07.020537   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:07.064242   60933 cri.go:89] found id: ""
	I1216 21:02:07.064274   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.064283   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:07.064289   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:07.064337   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:07.108865   60933 cri.go:89] found id: ""
	I1216 21:02:07.108899   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.108910   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:07.108917   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:07.108985   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:07.149021   60933 cri.go:89] found id: ""
	I1216 21:02:07.149051   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.149060   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:07.149066   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:07.149120   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:07.187808   60933 cri.go:89] found id: ""
	I1216 21:02:07.187833   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.187843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:07.187850   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:07.187912   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:07.228748   60933 cri.go:89] found id: ""
	I1216 21:02:07.228774   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.228785   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:07.228792   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:07.228853   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:07.267961   60933 cri.go:89] found id: ""
	I1216 21:02:07.267996   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.268012   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:07.268021   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:07.268099   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:07.312464   60933 cri.go:89] found id: ""
	I1216 21:02:07.312491   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.312498   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:07.312503   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:07.312554   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:07.351902   60933 cri.go:89] found id: ""
	I1216 21:02:07.351933   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.351946   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:07.351958   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:07.351974   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:07.405985   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:07.406050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:07.420796   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:07.420842   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:07.506527   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:07.506559   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:07.506574   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:07.587965   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:07.588001   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:06.437434   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:08.937843   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:07.320229   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:09.819940   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:09.954900   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:11.955004   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:10.132876   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:10.146785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:10.146858   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:10.189278   60933 cri.go:89] found id: ""
	I1216 21:02:10.189312   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.189324   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:10.189332   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:10.189402   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:10.228331   60933 cri.go:89] found id: ""
	I1216 21:02:10.228370   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.228378   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:10.228383   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:10.228436   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:10.266424   60933 cri.go:89] found id: ""
	I1216 21:02:10.266458   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.266470   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:10.266478   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:10.266542   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:10.305865   60933 cri.go:89] found id: ""
	I1216 21:02:10.305890   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.305902   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:10.305909   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:10.305968   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:10.344211   60933 cri.go:89] found id: ""
	I1216 21:02:10.344239   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.344247   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:10.344253   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:10.344314   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:10.381939   60933 cri.go:89] found id: ""
	I1216 21:02:10.381993   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.382004   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:10.382011   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:10.382076   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:10.418882   60933 cri.go:89] found id: ""
	I1216 21:02:10.418908   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.418915   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:10.418921   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:10.418972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:10.458397   60933 cri.go:89] found id: ""
	I1216 21:02:10.458425   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.458434   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:10.458447   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:10.458462   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:10.472152   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:10.472180   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:10.545888   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:10.545913   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:10.545926   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:10.627223   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:10.627293   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:10.676606   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:10.676633   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:13.227283   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:13.242871   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:13.242954   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:13.280676   60933 cri.go:89] found id: ""
	I1216 21:02:13.280711   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.280723   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:13.280731   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:13.280786   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:13.321357   60933 cri.go:89] found id: ""
	I1216 21:02:13.321389   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.321400   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:13.321408   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:13.321474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:13.359002   60933 cri.go:89] found id: ""
	I1216 21:02:13.359030   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.359042   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:13.359050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:13.359116   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:13.395879   60933 cri.go:89] found id: ""
	I1216 21:02:13.395922   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.395941   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:13.395950   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:13.396017   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:13.436761   60933 cri.go:89] found id: ""
	I1216 21:02:13.436781   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.436788   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:13.436793   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:13.436852   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:13.478839   60933 cri.go:89] found id: ""
	I1216 21:02:13.478869   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.478877   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:13.478883   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:13.478947   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:13.520013   60933 cri.go:89] found id: ""
	I1216 21:02:13.520037   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.520044   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:13.520050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:13.520124   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:13.556973   60933 cri.go:89] found id: ""
	I1216 21:02:13.557001   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.557013   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:13.557023   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:13.557039   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:13.613499   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:13.613537   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:13.628689   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:13.628724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:13.706556   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:13.706576   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:13.706589   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:13.786379   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:13.786419   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:11.436179   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:13.436800   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:11.820109   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:13.820778   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:14.457666   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.955591   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.333578   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:16.347948   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:16.348020   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:16.386928   60933 cri.go:89] found id: ""
	I1216 21:02:16.386955   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.386963   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:16.386969   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:16.387033   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:16.425192   60933 cri.go:89] found id: ""
	I1216 21:02:16.425253   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.425265   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:16.425273   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:16.425355   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:16.465522   60933 cri.go:89] found id: ""
	I1216 21:02:16.465554   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.465565   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:16.465573   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:16.465638   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:16.504567   60933 cri.go:89] found id: ""
	I1216 21:02:16.504605   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.504616   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:16.504624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:16.504694   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:16.541823   60933 cri.go:89] found id: ""
	I1216 21:02:16.541852   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.541864   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:16.541872   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:16.541942   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:16.580898   60933 cri.go:89] found id: ""
	I1216 21:02:16.580927   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.580938   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:16.580946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:16.581003   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:16.626006   60933 cri.go:89] found id: ""
	I1216 21:02:16.626036   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.626046   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:16.626053   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:16.626109   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:16.662686   60933 cri.go:89] found id: ""
	I1216 21:02:16.662712   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.662719   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:16.662728   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:16.662740   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:16.717939   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:16.717978   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:16.733431   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:16.733466   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:16.807379   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:16.807409   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:16.807421   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:16.896455   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:16.896492   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:15.437791   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:17.935778   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.321167   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:18.819624   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:18.955621   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:20.956220   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:19.442959   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:19.458684   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:19.458749   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:19.499907   60933 cri.go:89] found id: ""
	I1216 21:02:19.499938   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.499947   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:19.499954   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:19.500002   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:19.538010   60933 cri.go:89] found id: ""
	I1216 21:02:19.538035   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.538043   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:19.538049   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:19.538148   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:19.577097   60933 cri.go:89] found id: ""
	I1216 21:02:19.577131   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.577139   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:19.577145   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:19.577196   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:19.617288   60933 cri.go:89] found id: ""
	I1216 21:02:19.617316   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.617326   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:19.617332   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:19.617392   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:19.658066   60933 cri.go:89] found id: ""
	I1216 21:02:19.658090   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.658097   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:19.658103   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:19.658153   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:19.696077   60933 cri.go:89] found id: ""
	I1216 21:02:19.696108   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.696121   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:19.696131   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:19.696189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:19.737657   60933 cri.go:89] found id: ""
	I1216 21:02:19.737692   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.737704   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:19.737712   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:19.737776   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:19.778699   60933 cri.go:89] found id: ""
	I1216 21:02:19.778729   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.778738   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:19.778746   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:19.778757   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:19.841941   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:19.841979   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:19.857752   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:19.857788   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:19.935980   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:19.936004   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:19.936020   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:20.019999   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:20.020046   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:22.566398   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:22.580376   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:22.580472   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:22.620240   60933 cri.go:89] found id: ""
	I1216 21:02:22.620273   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.620284   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:22.620292   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:22.620355   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:22.656413   60933 cri.go:89] found id: ""
	I1216 21:02:22.656444   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.656455   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:22.656463   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:22.656531   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:22.690956   60933 cri.go:89] found id: ""
	I1216 21:02:22.690978   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.690986   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:22.690992   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:22.691040   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:22.734851   60933 cri.go:89] found id: ""
	I1216 21:02:22.734885   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.734895   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:22.734903   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:22.734969   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:22.774416   60933 cri.go:89] found id: ""
	I1216 21:02:22.774450   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.774461   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:22.774467   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:22.774535   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:22.811162   60933 cri.go:89] found id: ""
	I1216 21:02:22.811192   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.811204   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:22.811212   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:22.811296   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:22.851955   60933 cri.go:89] found id: ""
	I1216 21:02:22.851980   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.851987   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:22.851993   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:22.852051   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:22.888699   60933 cri.go:89] found id: ""
	I1216 21:02:22.888725   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.888736   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:22.888747   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:22.888769   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:22.944065   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:22.944100   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:22.960842   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:22.960872   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:23.036229   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:23.036251   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:23.036263   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:23.122493   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:23.122535   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:19.936687   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:21.937222   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:24.437190   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:20.820544   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:22.820771   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.319776   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:22.956523   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.456180   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.667995   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:25.682152   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:25.682222   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:25.719092   60933 cri.go:89] found id: ""
	I1216 21:02:25.719120   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.719130   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:25.719135   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:25.719190   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:25.757668   60933 cri.go:89] found id: ""
	I1216 21:02:25.757702   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.757712   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:25.757720   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:25.757791   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:25.809743   60933 cri.go:89] found id: ""
	I1216 21:02:25.809776   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.809787   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:25.809795   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:25.809857   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:25.849181   60933 cri.go:89] found id: ""
	I1216 21:02:25.849211   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.849222   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:25.849230   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:25.849295   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:25.891032   60933 cri.go:89] found id: ""
	I1216 21:02:25.891079   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.891091   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:25.891098   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:25.891169   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:25.930549   60933 cri.go:89] found id: ""
	I1216 21:02:25.930575   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.930583   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:25.930589   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:25.930639   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:25.971709   60933 cri.go:89] found id: ""
	I1216 21:02:25.971736   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.971744   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:25.971749   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:25.971797   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:26.007728   60933 cri.go:89] found id: ""
	I1216 21:02:26.007760   60933 logs.go:282] 0 containers: []
	W1216 21:02:26.007769   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:26.007778   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:26.007791   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:26.059710   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:26.059752   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:26.074596   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:26.074627   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:26.145892   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:26.145913   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:26.145924   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:26.225961   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:26.226000   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:28.772974   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:28.787001   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:28.787078   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:28.828176   60933 cri.go:89] found id: ""
	I1216 21:02:28.828206   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.828214   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:28.828223   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:28.828292   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:28.872750   60933 cri.go:89] found id: ""
	I1216 21:02:28.872781   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.872792   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:28.872798   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:28.872859   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:28.914844   60933 cri.go:89] found id: ""
	I1216 21:02:28.914871   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.914879   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:28.914884   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:28.914934   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:28.953541   60933 cri.go:89] found id: ""
	I1216 21:02:28.953569   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.953579   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:28.953587   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:28.953647   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:28.992768   60933 cri.go:89] found id: ""
	I1216 21:02:28.992797   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.992808   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:28.992816   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:28.992882   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:29.030069   60933 cri.go:89] found id: ""
	I1216 21:02:29.030102   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.030113   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:29.030121   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:29.030187   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:29.068629   60933 cri.go:89] found id: ""
	I1216 21:02:29.068658   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.068666   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:29.068677   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:29.068726   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:29.103664   60933 cri.go:89] found id: ""
	I1216 21:02:29.103697   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.103708   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:29.103719   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:29.103732   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:29.151225   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:29.151276   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:29.209448   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:29.209499   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:29.225232   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:29.225257   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:29.309812   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:29.309832   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:29.309846   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:26.937193   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:28.937302   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:27.320052   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:29.820220   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:27.956244   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:29.957111   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:32.456969   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:31.896263   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:31.912378   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:31.912455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:31.950479   60933 cri.go:89] found id: ""
	I1216 21:02:31.950508   60933 logs.go:282] 0 containers: []
	W1216 21:02:31.950527   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:31.950535   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:31.950600   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:31.990479   60933 cri.go:89] found id: ""
	I1216 21:02:31.990504   60933 logs.go:282] 0 containers: []
	W1216 21:02:31.990515   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:31.990533   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:31.990599   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:32.032808   60933 cri.go:89] found id: ""
	I1216 21:02:32.032834   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.032843   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:32.032853   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:32.032913   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:32.069719   60933 cri.go:89] found id: ""
	I1216 21:02:32.069748   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.069759   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:32.069772   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:32.069830   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:32.106652   60933 cri.go:89] found id: ""
	I1216 21:02:32.106685   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.106694   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:32.106701   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:32.106767   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:32.145921   60933 cri.go:89] found id: ""
	I1216 21:02:32.145949   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.145957   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:32.145963   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:32.146014   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:32.206313   60933 cri.go:89] found id: ""
	I1216 21:02:32.206342   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.206351   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:32.206356   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:32.206410   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:32.262757   60933 cri.go:89] found id: ""
	I1216 21:02:32.262794   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.262806   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:32.262818   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:32.262832   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:32.320221   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:32.320251   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:32.375395   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:32.375437   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:32.391103   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:32.391137   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:32.474709   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:32.474741   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:32.474757   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
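The cycle that ends here is one full pass of the start-up diagnostics this log repeats every few seconds: crictl is asked for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), nothing is found, and the fallback log gathering then fails at the "describe nodes" step because nothing is listening on localhost:8443. A minimal sketch of the same checks run by hand on the node (for example over `minikube ssh` into the affected profile); every command is taken from the log above, only the loop is added for convenience:

    # ask CRI-O for each control-plane container the test expects
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$c"
    done
    # the fallback log sources the test gathers when the list comes back empty
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

As long as the apiserver container never appears, the describe step keeps returning the "connection refused" error seen throughout the rest of this excerpt.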
	I1216 21:02:31.436689   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:33.436921   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:32.320631   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:34.819726   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:34.956369   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:37.455577   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:35.058809   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:35.073074   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:35.073157   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:35.115280   60933 cri.go:89] found id: ""
	I1216 21:02:35.115305   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.115312   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:35.115318   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:35.115378   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:35.151561   60933 cri.go:89] found id: ""
	I1216 21:02:35.151589   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.151597   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:35.151603   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:35.151654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:35.192061   60933 cri.go:89] found id: ""
	I1216 21:02:35.192088   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.192095   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:35.192111   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:35.192161   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:35.231493   60933 cri.go:89] found id: ""
	I1216 21:02:35.231523   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.231531   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:35.231538   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:35.231586   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:35.271236   60933 cri.go:89] found id: ""
	I1216 21:02:35.271291   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.271300   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:35.271306   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:35.271368   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:35.309950   60933 cri.go:89] found id: ""
	I1216 21:02:35.309980   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.309991   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:35.309999   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:35.310062   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:35.347762   60933 cri.go:89] found id: ""
	I1216 21:02:35.347790   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.347797   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:35.347803   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:35.347851   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:35.390732   60933 cri.go:89] found id: ""
	I1216 21:02:35.390757   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.390765   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:35.390774   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:35.390785   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:35.447068   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:35.447112   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:35.462873   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:35.462904   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:35.541120   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:35.541145   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:35.541162   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:35.627073   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:35.627120   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:38.170994   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:38.194371   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:38.194434   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:38.248023   60933 cri.go:89] found id: ""
	I1216 21:02:38.248050   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.248061   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:38.248069   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:38.248147   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:38.300143   60933 cri.go:89] found id: ""
	I1216 21:02:38.300175   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.300185   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:38.300193   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:38.300253   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:38.345273   60933 cri.go:89] found id: ""
	I1216 21:02:38.345300   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.345308   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:38.345314   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:38.345389   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:38.383032   60933 cri.go:89] found id: ""
	I1216 21:02:38.383066   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.383075   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:38.383081   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:38.383135   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:38.426042   60933 cri.go:89] found id: ""
	I1216 21:02:38.426074   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.426086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:38.426094   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:38.426159   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:38.467596   60933 cri.go:89] found id: ""
	I1216 21:02:38.467625   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.467634   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:38.467640   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:38.467692   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:38.509340   60933 cri.go:89] found id: ""
	I1216 21:02:38.509380   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.509391   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:38.509399   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:38.509470   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:38.549306   60933 cri.go:89] found id: ""
	I1216 21:02:38.549337   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.549354   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:38.549365   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:38.549381   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:38.564091   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:38.564131   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:38.639173   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:38.639201   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:38.639219   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:38.716320   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:38.716376   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:38.756779   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:38.756815   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:35.437230   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:37.938595   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:36.820302   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:39.319712   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:39.954558   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:41.955761   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
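Interleaved with those diagnostics, three other start jobs (PIDs 60215, 60829 and 60421 in the log) keep polling their metrics-server pods, and pod_ready.go reports the Ready condition as False every couple of seconds for the whole excerpt. A hedged way to check the same condition by hand against the affected cluster's context; the k8s-app=metrics-server label is an assumption about the usual metrics-server manifest, not something this log states:

    # print each metrics-server pod with its Ready condition (label selector is assumed)
    kubectl -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'

A pod stuck at Ready=False here usually means its readiness probe never succeeds, which matches the repeated status "Ready":"False" lines that follow.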
	I1216 21:02:41.310680   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:41.327606   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:41.327684   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:41.371622   60933 cri.go:89] found id: ""
	I1216 21:02:41.371657   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.371670   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:41.371679   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:41.371739   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:41.408149   60933 cri.go:89] found id: ""
	I1216 21:02:41.408187   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.408198   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:41.408203   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:41.408252   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:41.448445   60933 cri.go:89] found id: ""
	I1216 21:02:41.448471   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.448478   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:41.448484   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:41.448533   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:41.489957   60933 cri.go:89] found id: ""
	I1216 21:02:41.489989   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.490000   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:41.490007   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:41.490069   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:41.532891   60933 cri.go:89] found id: ""
	I1216 21:02:41.532918   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.532930   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:41.532937   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:41.532992   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:41.570315   60933 cri.go:89] found id: ""
	I1216 21:02:41.570342   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.570351   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:41.570357   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:41.570455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:41.606833   60933 cri.go:89] found id: ""
	I1216 21:02:41.606867   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.606880   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:41.606890   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:41.606959   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:41.643862   60933 cri.go:89] found id: ""
	I1216 21:02:41.643886   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.643894   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:41.643902   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:41.643914   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:41.657621   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:41.657654   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:41.732256   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:41.732281   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:41.732295   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:41.822045   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:41.822081   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:41.863900   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:41.863933   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:40.436149   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:42.436247   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:44.436916   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:41.321155   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:43.819721   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:43.956057   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:46.455802   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:44.425154   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:44.440148   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:44.440223   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:44.478216   60933 cri.go:89] found id: ""
	I1216 21:02:44.478247   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.478258   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:44.478266   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:44.478329   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:44.517054   60933 cri.go:89] found id: ""
	I1216 21:02:44.517078   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.517084   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:44.517090   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:44.517137   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:44.554683   60933 cri.go:89] found id: ""
	I1216 21:02:44.554778   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.554801   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:44.554845   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:44.554927   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:44.600748   60933 cri.go:89] found id: ""
	I1216 21:02:44.600788   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.600800   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:44.600809   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:44.600863   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:44.637564   60933 cri.go:89] found id: ""
	I1216 21:02:44.637592   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.637600   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:44.637606   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:44.637656   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:44.676619   60933 cri.go:89] found id: ""
	I1216 21:02:44.676662   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.676674   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:44.676683   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:44.676755   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:44.715920   60933 cri.go:89] found id: ""
	I1216 21:02:44.715956   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.715964   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:44.715970   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:44.716027   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:44.755134   60933 cri.go:89] found id: ""
	I1216 21:02:44.755167   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.755179   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:44.755191   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:44.755202   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:44.796135   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:44.796164   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:44.850550   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:44.850593   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:44.865278   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:44.865305   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:44.942987   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:44.943013   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:44.943026   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
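Each pass of the 60933 diagnostics opens with a pgrep probe for a running apiserver process, and the container-status step is written to fall back to docker when crictl is not on the PATH. Both one-liners are copied from the log; run on the node they confirm there is neither an apiserver process nor any container at all (the quoting around the pattern is added here for safety):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'                  # -f full cmdline, -x whole-line match, -n newest only
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a # prefer crictl, fall back to docker ps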
	I1216 21:02:47.529850   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:47.546292   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:47.546369   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:47.589597   60933 cri.go:89] found id: ""
	I1216 21:02:47.589627   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.589640   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:47.589648   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:47.589713   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:47.630998   60933 cri.go:89] found id: ""
	I1216 21:02:47.631030   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.631043   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:47.631051   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:47.631118   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:47.670118   60933 cri.go:89] found id: ""
	I1216 21:02:47.670150   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.670162   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:47.670169   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:47.670233   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:47.714516   60933 cri.go:89] found id: ""
	I1216 21:02:47.714549   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.714560   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:47.714568   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:47.714631   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:47.752042   60933 cri.go:89] found id: ""
	I1216 21:02:47.752074   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.752086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:47.752093   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:47.752158   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:47.793612   60933 cri.go:89] found id: ""
	I1216 21:02:47.793645   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.793656   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:47.793664   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:47.793734   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:47.833489   60933 cri.go:89] found id: ""
	I1216 21:02:47.833518   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.833529   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:47.833541   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:47.833602   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:47.869744   60933 cri.go:89] found id: ""
	I1216 21:02:47.869772   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.869783   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:47.869793   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:47.869809   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:47.910640   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:47.910674   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:47.965747   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:47.965781   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:47.979760   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:47.979786   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:48.056887   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:48.056917   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:48.056933   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:46.439409   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.937248   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:46.320935   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.820700   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.955697   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.955859   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.641224   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:50.657267   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:50.657346   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:50.696890   60933 cri.go:89] found id: ""
	I1216 21:02:50.696916   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.696924   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:50.696930   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:50.696993   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:50.734485   60933 cri.go:89] found id: ""
	I1216 21:02:50.734514   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.734524   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:50.734533   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:50.734598   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:50.776241   60933 cri.go:89] found id: ""
	I1216 21:02:50.776268   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.776277   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:50.776283   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:50.776358   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:50.816449   60933 cri.go:89] found id: ""
	I1216 21:02:50.816482   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.816493   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:50.816501   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:50.816561   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:50.857458   60933 cri.go:89] found id: ""
	I1216 21:02:50.857481   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.857488   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:50.857494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:50.857556   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:50.895367   60933 cri.go:89] found id: ""
	I1216 21:02:50.895391   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.895398   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:50.895404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:50.895466   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:50.934101   60933 cri.go:89] found id: ""
	I1216 21:02:50.934128   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.934138   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:50.934152   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:50.934212   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:50.978625   60933 cri.go:89] found id: ""
	I1216 21:02:50.978654   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.978665   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:50.978675   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:50.978688   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:51.061867   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:51.061908   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:51.101188   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:51.101228   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:51.157426   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:51.157470   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:51.172835   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:51.172882   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:51.247678   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:53.748503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:53.763357   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:53.763425   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:53.807963   60933 cri.go:89] found id: ""
	I1216 21:02:53.807990   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.807999   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:53.808005   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:53.808063   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:53.846840   60933 cri.go:89] found id: ""
	I1216 21:02:53.846867   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.846876   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:53.846881   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:53.846929   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:53.885099   60933 cri.go:89] found id: ""
	I1216 21:02:53.885131   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.885146   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:53.885156   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:53.885226   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:53.923859   60933 cri.go:89] found id: ""
	I1216 21:02:53.923890   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.923901   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:53.923908   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:53.923972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:53.964150   60933 cri.go:89] found id: ""
	I1216 21:02:53.964176   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.964186   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:53.964201   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:53.964265   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:54.004676   60933 cri.go:89] found id: ""
	I1216 21:02:54.004707   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.004718   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:54.004725   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:54.004789   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:54.042560   60933 cri.go:89] found id: ""
	I1216 21:02:54.042585   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.042595   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:54.042603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:54.042666   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:54.081002   60933 cri.go:89] found id: ""
	I1216 21:02:54.081030   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.081038   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:54.081046   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:54.081058   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:54.132825   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:54.132865   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:54.147793   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:54.147821   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:54.226668   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:54.226692   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:54.226704   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:54.307792   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:54.307832   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:50.938230   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:53.436746   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.820949   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:53.320283   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:52.957187   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:54.958212   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.456612   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:56.852207   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:56.866404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:56.866469   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:56.911786   60933 cri.go:89] found id: ""
	I1216 21:02:56.911811   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.911820   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:56.911829   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:56.911886   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:56.953491   60933 cri.go:89] found id: ""
	I1216 21:02:56.953520   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.953535   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:56.953543   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:56.953610   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:56.991569   60933 cri.go:89] found id: ""
	I1216 21:02:56.991605   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.991616   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:56.991622   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:56.991685   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:57.026808   60933 cri.go:89] found id: ""
	I1216 21:02:57.026837   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.026845   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:57.026851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:57.026913   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:57.065539   60933 cri.go:89] found id: ""
	I1216 21:02:57.065569   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.065577   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:57.065583   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:57.065642   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:57.103911   60933 cri.go:89] found id: ""
	I1216 21:02:57.103942   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.103952   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:57.103960   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:57.104015   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:57.141177   60933 cri.go:89] found id: ""
	I1216 21:02:57.141200   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.141207   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:57.141213   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:57.141262   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:57.178532   60933 cri.go:89] found id: ""
	I1216 21:02:57.178590   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.178604   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:57.178614   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:57.178629   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:57.234811   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:57.234846   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:57.251540   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:57.251569   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:57.329029   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:57.329061   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:57.329077   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:57.412624   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:57.412665   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:55.436981   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.438061   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:55.819607   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.819648   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.820705   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.955043   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:01.956284   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.960422   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:59.974889   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:59.974966   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:00.012641   60933 cri.go:89] found id: ""
	I1216 21:03:00.012669   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.012676   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:00.012682   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:00.012730   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:00.053730   60933 cri.go:89] found id: ""
	I1216 21:03:00.053766   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.053778   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:00.053785   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:00.053847   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:00.091213   60933 cri.go:89] found id: ""
	I1216 21:03:00.091261   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.091274   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:00.091283   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:00.091357   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:00.131357   60933 cri.go:89] found id: ""
	I1216 21:03:00.131382   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.131390   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:00.131396   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:00.131460   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:00.168331   60933 cri.go:89] found id: ""
	I1216 21:03:00.168362   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.168373   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:00.168380   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:00.168446   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:00.208326   60933 cri.go:89] found id: ""
	I1216 21:03:00.208360   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.208369   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:00.208377   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:00.208440   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:00.245775   60933 cri.go:89] found id: ""
	I1216 21:03:00.245800   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.245808   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:00.245814   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:00.245863   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:00.283062   60933 cri.go:89] found id: ""
	I1216 21:03:00.283091   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.283100   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:00.283108   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:00.283119   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:00.358767   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:00.358787   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:00.358799   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:00.443422   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:00.443460   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:00.491511   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:00.491551   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:00.566131   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:00.566172   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:03.080319   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:03.094733   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:03.094818   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:03.132388   60933 cri.go:89] found id: ""
	I1216 21:03:03.132419   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.132428   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:03.132433   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:03.132488   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:03.172345   60933 cri.go:89] found id: ""
	I1216 21:03:03.172374   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.172386   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:03.172393   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:03.172474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:03.210444   60933 cri.go:89] found id: ""
	I1216 21:03:03.210479   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.210488   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:03.210494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:03.210544   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:03.248605   60933 cri.go:89] found id: ""
	I1216 21:03:03.248644   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.248656   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:03.248664   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:03.248723   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:03.286822   60933 cri.go:89] found id: ""
	I1216 21:03:03.286854   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.286862   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:03.286868   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:03.286921   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:03.329304   60933 cri.go:89] found id: ""
	I1216 21:03:03.329333   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.329344   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:03.329352   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:03.329417   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:03.367337   60933 cri.go:89] found id: ""
	I1216 21:03:03.367361   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.367368   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:03.367373   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:03.367420   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:03.409799   60933 cri.go:89] found id: ""
	I1216 21:03:03.409821   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.409829   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:03.409838   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:03.409850   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:03.466941   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:03.466976   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:03.483090   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:03.483117   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:03.566835   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:03.566860   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:03.566878   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:03.649747   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:03.649793   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:59.936221   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:01.936251   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:03.936714   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:02.319063   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:04.319653   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:03.956397   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:05.956531   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:06.193505   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:06.207797   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:06.207878   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:06.245401   60933 cri.go:89] found id: ""
	I1216 21:03:06.245437   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.245448   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:06.245456   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:06.245521   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:06.301205   60933 cri.go:89] found id: ""
	I1216 21:03:06.301239   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.301250   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:06.301257   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:06.301326   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:06.340325   60933 cri.go:89] found id: ""
	I1216 21:03:06.340352   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.340362   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:06.340369   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:06.340429   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:06.378321   60933 cri.go:89] found id: ""
	I1216 21:03:06.378351   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.378359   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:06.378365   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:06.378422   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:06.416354   60933 cri.go:89] found id: ""
	I1216 21:03:06.416390   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.416401   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:06.416409   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:06.416473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:06.459926   60933 cri.go:89] found id: ""
	I1216 21:03:06.459955   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.459967   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:06.459975   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:06.460063   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:06.501818   60933 cri.go:89] found id: ""
	I1216 21:03:06.501849   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.501860   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:06.501866   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:06.501926   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:06.537552   60933 cri.go:89] found id: ""
	I1216 21:03:06.537583   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.537598   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:06.537607   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:06.537621   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:06.592170   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:06.592212   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:06.607148   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:06.607183   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:06.676114   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:06.676140   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:06.676151   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:06.756009   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:06.756052   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:09.298166   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:09.313104   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:09.313189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:09.356598   60933 cri.go:89] found id: ""
	I1216 21:03:09.356625   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.356640   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:09.356649   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:09.356715   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:05.937241   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:07.938858   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:06.322260   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:08.818974   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:08.455838   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:10.955332   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:09.395406   60933 cri.go:89] found id: ""
	I1216 21:03:09.395439   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.395449   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:09.395456   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:09.395521   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:09.440401   60933 cri.go:89] found id: ""
	I1216 21:03:09.440423   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.440430   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:09.440435   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:09.440504   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:09.478798   60933 cri.go:89] found id: ""
	I1216 21:03:09.478828   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.478843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:09.478853   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:09.478921   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:09.515542   60933 cri.go:89] found id: ""
	I1216 21:03:09.515575   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.515587   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:09.515596   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:09.515654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:09.554150   60933 cri.go:89] found id: ""
	I1216 21:03:09.554183   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.554194   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:09.554205   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:09.554279   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:09.591699   60933 cri.go:89] found id: ""
	I1216 21:03:09.591730   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.591740   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:09.591747   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:09.591811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:09.629938   60933 cri.go:89] found id: ""
	I1216 21:03:09.629970   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.629980   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:09.629991   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:09.630008   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:09.711255   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:09.711284   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:09.711300   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:09.790202   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:09.790243   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:09.839567   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:09.839597   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:09.893010   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:09.893050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:12.409934   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:12.423715   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:12.423789   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:12.461995   60933 cri.go:89] found id: ""
	I1216 21:03:12.462038   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.462046   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:12.462052   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:12.462101   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:12.501738   60933 cri.go:89] found id: ""
	I1216 21:03:12.501769   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.501779   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:12.501785   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:12.501833   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:12.541758   60933 cri.go:89] found id: ""
	I1216 21:03:12.541785   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.541795   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:12.541802   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:12.541850   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:12.579173   60933 cri.go:89] found id: ""
	I1216 21:03:12.579199   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.579206   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:12.579212   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:12.579302   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:12.624382   60933 cri.go:89] found id: ""
	I1216 21:03:12.624407   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.624418   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:12.624426   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:12.624488   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:12.665139   60933 cri.go:89] found id: ""
	I1216 21:03:12.665178   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.665190   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:12.665200   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:12.665274   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:12.711586   60933 cri.go:89] found id: ""
	I1216 21:03:12.711611   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.711619   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:12.711627   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:12.711678   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:12.761566   60933 cri.go:89] found id: ""
	I1216 21:03:12.761600   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.761612   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:12.761624   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:12.761640   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:12.824282   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:12.824315   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:12.839335   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:12.839371   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:12.918317   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:12.918341   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:12.918357   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:13.000375   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:13.000410   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:10.438136   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:12.936742   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:11.319284   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:13.320036   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:15.322965   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:12.955450   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:14.956186   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:16.956603   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:15.542372   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:15.556877   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:15.556960   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:15.599345   60933 cri.go:89] found id: ""
	I1216 21:03:15.599378   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.599389   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:15.599414   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:15.599479   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:15.642072   60933 cri.go:89] found id: ""
	I1216 21:03:15.642106   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.642116   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:15.642124   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:15.642189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:15.679989   60933 cri.go:89] found id: ""
	I1216 21:03:15.680025   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.680036   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:15.680044   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:15.680103   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:15.718343   60933 cri.go:89] found id: ""
	I1216 21:03:15.718371   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.718378   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:15.718384   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:15.718433   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:15.759937   60933 cri.go:89] found id: ""
	I1216 21:03:15.759971   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.759981   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:15.759988   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:15.760081   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:15.801434   60933 cri.go:89] found id: ""
	I1216 21:03:15.801463   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.801471   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:15.801477   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:15.801540   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:15.841855   60933 cri.go:89] found id: ""
	I1216 21:03:15.841879   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.841886   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:15.841892   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:15.841962   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:15.883951   60933 cri.go:89] found id: ""
	I1216 21:03:15.883974   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.883982   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:15.883990   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:15.884004   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:15.960868   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:15.960902   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:16.005700   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:16.005730   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:16.061128   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:16.061165   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:16.075601   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:16.075630   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:16.147810   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:18.648677   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:18.663298   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:18.663367   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:18.713281   60933 cri.go:89] found id: ""
	I1216 21:03:18.713313   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.713324   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:18.713332   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:18.713396   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:18.764861   60933 cri.go:89] found id: ""
	I1216 21:03:18.764892   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.764905   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:18.764912   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:18.764978   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:18.816140   60933 cri.go:89] found id: ""
	I1216 21:03:18.816170   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.816180   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:18.816188   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:18.816251   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:18.852118   60933 cri.go:89] found id: ""
	I1216 21:03:18.852151   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.852163   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:18.852171   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:18.852235   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:18.887996   60933 cri.go:89] found id: ""
	I1216 21:03:18.888018   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.888025   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:18.888031   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:18.888089   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:18.925415   60933 cri.go:89] found id: ""
	I1216 21:03:18.925437   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.925445   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:18.925451   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:18.925498   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:18.964853   60933 cri.go:89] found id: ""
	I1216 21:03:18.964884   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.964892   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:18.964897   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:18.964964   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:19.000822   60933 cri.go:89] found id: ""
	I1216 21:03:19.000848   60933 logs.go:282] 0 containers: []
	W1216 21:03:19.000856   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:19.000865   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:19.000879   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:19.051571   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:19.051612   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:19.066737   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:19.066767   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:19.143120   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:19.143144   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:19.143156   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:19.229811   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:19.229850   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:15.437189   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:17.439345   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:17.820374   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:19.820460   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:19.455707   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:21.955275   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:21.776440   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:21.792869   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:21.792951   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:21.831100   60933 cri.go:89] found id: ""
	I1216 21:03:21.831127   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.831134   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:21.831140   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:21.831196   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:21.869124   60933 cri.go:89] found id: ""
	I1216 21:03:21.869147   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.869155   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:21.869160   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:21.869215   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:21.909891   60933 cri.go:89] found id: ""
	I1216 21:03:21.909926   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.909938   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:21.909946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:21.910032   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:21.949140   60933 cri.go:89] found id: ""
	I1216 21:03:21.949169   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.949179   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:21.949186   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:21.949245   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:21.987741   60933 cri.go:89] found id: ""
	I1216 21:03:21.987771   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.987780   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:21.987785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:21.987839   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:22.025565   60933 cri.go:89] found id: ""
	I1216 21:03:22.025593   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.025601   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:22.025607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:22.025659   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:22.062076   60933 cri.go:89] found id: ""
	I1216 21:03:22.062110   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.062120   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:22.062127   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:22.062198   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:22.102037   60933 cri.go:89] found id: ""
	I1216 21:03:22.102065   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.102093   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:22.102105   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:22.102122   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:22.159185   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:22.159219   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:22.175139   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:22.175168   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:22.255769   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:22.255801   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:22.255817   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:22.339633   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:22.339681   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:19.937328   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:22.435709   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.436704   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:22.319227   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.819278   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.455668   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:26.956382   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.883865   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:24.898198   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:24.898287   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:24.939472   60933 cri.go:89] found id: ""
	I1216 21:03:24.939500   60933 logs.go:282] 0 containers: []
	W1216 21:03:24.939511   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:24.939518   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:24.939583   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:24.981798   60933 cri.go:89] found id: ""
	I1216 21:03:24.981822   60933 logs.go:282] 0 containers: []
	W1216 21:03:24.981829   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:24.981834   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:24.981889   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:25.021332   60933 cri.go:89] found id: ""
	I1216 21:03:25.021366   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.021373   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:25.021379   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:25.021431   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:25.057811   60933 cri.go:89] found id: ""
	I1216 21:03:25.057836   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.057843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:25.057848   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:25.057907   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:25.093852   60933 cri.go:89] found id: ""
	I1216 21:03:25.093881   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.093890   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:25.093895   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:25.093945   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:25.132779   60933 cri.go:89] found id: ""
	I1216 21:03:25.132813   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.132825   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:25.132834   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:25.132912   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:25.173942   60933 cri.go:89] found id: ""
	I1216 21:03:25.173967   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.173974   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:25.173990   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:25.174048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:25.213105   60933 cri.go:89] found id: ""
	I1216 21:03:25.213127   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.213135   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:25.213144   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:25.213155   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:25.267517   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:25.267557   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:25.284144   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:25.284177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:25.362901   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:25.362931   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:25.362947   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:25.450193   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:25.450227   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:27.995716   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:28.012044   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:28.012138   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:28.050404   60933 cri.go:89] found id: ""
	I1216 21:03:28.050432   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.050441   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:28.050446   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:28.050492   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:28.087830   60933 cri.go:89] found id: ""
	I1216 21:03:28.087855   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.087862   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:28.087885   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:28.087933   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:28.125122   60933 cri.go:89] found id: ""
	I1216 21:03:28.125147   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.125154   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:28.125160   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:28.125233   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:28.160619   60933 cri.go:89] found id: ""
	I1216 21:03:28.160646   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.160655   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:28.160661   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:28.160726   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:28.198951   60933 cri.go:89] found id: ""
	I1216 21:03:28.198977   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.198986   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:28.198993   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:28.199059   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:28.236596   60933 cri.go:89] found id: ""
	I1216 21:03:28.236621   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.236629   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:28.236635   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:28.236707   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:28.273955   60933 cri.go:89] found id: ""
	I1216 21:03:28.273979   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.273986   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:28.273992   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:28.274061   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:28.311908   60933 cri.go:89] found id: ""
	I1216 21:03:28.311943   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.311954   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:28.311965   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:28.311979   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:28.363870   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:28.363910   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:28.379919   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:28.379945   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:28.459998   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:28.460019   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:28.460030   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:28.543229   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:28.543306   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:26.936661   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:29.437169   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:26.820563   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:29.319981   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:28.956791   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.456708   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.086525   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:31.100833   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:31.100950   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:31.141356   60933 cri.go:89] found id: ""
	I1216 21:03:31.141385   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.141396   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:31.141403   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:31.141465   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:31.176609   60933 cri.go:89] found id: ""
	I1216 21:03:31.176641   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.176650   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:31.176657   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:31.176721   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:31.213959   60933 cri.go:89] found id: ""
	I1216 21:03:31.213984   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.213991   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:31.213997   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:31.214058   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:31.255183   60933 cri.go:89] found id: ""
	I1216 21:03:31.255208   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.255215   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:31.255220   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:31.255297   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:31.293475   60933 cri.go:89] found id: ""
	I1216 21:03:31.293501   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.293508   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:31.293514   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:31.293561   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:31.332010   60933 cri.go:89] found id: ""
	I1216 21:03:31.332041   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.332052   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:31.332061   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:31.332119   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:31.370301   60933 cri.go:89] found id: ""
	I1216 21:03:31.370331   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.370342   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:31.370349   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:31.370414   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:31.419526   60933 cri.go:89] found id: ""
	I1216 21:03:31.419553   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.419561   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:31.419570   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:31.419583   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:31.480125   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:31.480160   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:31.495464   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:31.495497   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:31.570747   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:31.570773   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:31.570788   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:31.651521   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:31.651564   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:34.200969   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:34.216519   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:34.216596   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:34.254185   60933 cri.go:89] found id: ""
	I1216 21:03:34.254218   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.254227   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:34.254242   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:34.254312   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:34.293194   60933 cri.go:89] found id: ""
	I1216 21:03:34.293225   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.293236   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:34.293242   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:34.293297   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:34.335002   60933 cri.go:89] found id: ""
	I1216 21:03:34.335030   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.335042   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:34.335050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:34.335112   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:34.370854   60933 cri.go:89] found id: ""
	I1216 21:03:34.370880   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.370887   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:34.370893   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:34.370938   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:31.439597   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.935941   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.820337   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.820497   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.955185   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:36.455713   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:34.409155   60933 cri.go:89] found id: ""
	I1216 21:03:34.409181   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.409189   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:34.409195   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:34.409256   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:34.448555   60933 cri.go:89] found id: ""
	I1216 21:03:34.448583   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.448594   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:34.448601   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:34.448663   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:34.486800   60933 cri.go:89] found id: ""
	I1216 21:03:34.486829   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.486842   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:34.486851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:34.486919   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:34.530274   60933 cri.go:89] found id: ""
	I1216 21:03:34.530299   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.530307   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:34.530317   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:34.530335   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:34.601587   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:34.601620   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:34.601637   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:34.680215   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:34.680250   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:34.721362   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:34.721389   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:34.776652   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:34.776693   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:37.292877   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:37.306976   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:37.307060   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:37.349370   60933 cri.go:89] found id: ""
	I1216 21:03:37.349405   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.349416   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:37.349424   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:37.349486   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:37.387213   60933 cri.go:89] found id: ""
	I1216 21:03:37.387271   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.387285   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:37.387294   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:37.387361   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:37.427138   60933 cri.go:89] found id: ""
	I1216 21:03:37.427164   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.427175   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:37.427182   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:37.427269   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:37.466751   60933 cri.go:89] found id: ""
	I1216 21:03:37.466776   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.466783   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:37.466788   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:37.466846   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:37.505078   60933 cri.go:89] found id: ""
	I1216 21:03:37.505115   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.505123   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:37.505128   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:37.505189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:37.548642   60933 cri.go:89] found id: ""
	I1216 21:03:37.548665   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.548673   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:37.548679   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:37.548738   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:37.592354   60933 cri.go:89] found id: ""
	I1216 21:03:37.592379   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.592386   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:37.592391   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:37.592441   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:37.631179   60933 cri.go:89] found id: ""
	I1216 21:03:37.631212   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.631221   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:37.631230   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:37.631261   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:37.683021   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:37.683062   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:37.698056   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:37.698087   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:37.774368   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:37.774397   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:37.774422   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:37.860470   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:37.860511   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:35.936409   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:37.936652   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:36.319436   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:38.819727   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:38.456251   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.957354   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.405278   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:40.420390   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:40.420473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:40.463963   60933 cri.go:89] found id: ""
	I1216 21:03:40.463994   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.464033   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:40.464041   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:40.464107   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:40.510321   60933 cri.go:89] found id: ""
	I1216 21:03:40.510352   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.510369   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:40.510376   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:40.510441   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:40.546580   60933 cri.go:89] found id: ""
	I1216 21:03:40.546610   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.546619   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:40.546624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:40.546686   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:40.583109   60933 cri.go:89] found id: ""
	I1216 21:03:40.583136   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.583144   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:40.583149   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:40.583202   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:40.628747   60933 cri.go:89] found id: ""
	I1216 21:03:40.628771   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.628778   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:40.628784   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:40.628845   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:40.663757   60933 cri.go:89] found id: ""
	I1216 21:03:40.663785   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.663796   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:40.663804   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:40.663867   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:40.703483   60933 cri.go:89] found id: ""
	I1216 21:03:40.703513   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.703522   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:40.703528   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:40.703592   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:40.742585   60933 cri.go:89] found id: ""
	I1216 21:03:40.742622   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.742632   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:40.742641   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:40.742653   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:40.757771   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:40.757809   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:40.837615   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:40.837642   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:40.837656   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:40.915403   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:40.915442   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:40.960762   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:40.960790   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:43.515302   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:43.530831   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:43.530906   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:43.571680   60933 cri.go:89] found id: ""
	I1216 21:03:43.571704   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.571712   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:43.571718   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:43.571779   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:43.615912   60933 cri.go:89] found id: ""
	I1216 21:03:43.615940   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.615948   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:43.615955   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:43.616013   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:43.654206   60933 cri.go:89] found id: ""
	I1216 21:03:43.654231   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.654241   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:43.654249   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:43.654309   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:43.690509   60933 cri.go:89] found id: ""
	I1216 21:03:43.690533   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.690541   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:43.690548   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:43.690595   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:43.728601   60933 cri.go:89] found id: ""
	I1216 21:03:43.728627   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.728634   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:43.728639   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:43.728685   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:43.769092   60933 cri.go:89] found id: ""
	I1216 21:03:43.769130   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.769198   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:43.769215   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:43.769292   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:43.812492   60933 cri.go:89] found id: ""
	I1216 21:03:43.812525   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.812537   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:43.812544   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:43.812613   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:43.852748   60933 cri.go:89] found id: ""
	I1216 21:03:43.852778   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.852787   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:43.852795   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:43.852807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:43.907800   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:43.907839   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:43.922806   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:43.922833   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:44.002511   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:44.002538   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:44.002551   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:44.081760   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:44.081801   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:40.437134   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:42.437214   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.820244   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:43.321298   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:43.455891   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:45.456281   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:46.625868   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:46.640266   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:46.640341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:46.677137   60933 cri.go:89] found id: ""
	I1216 21:03:46.677168   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.677179   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:46.677185   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:46.677241   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:46.714340   60933 cri.go:89] found id: ""
	I1216 21:03:46.714373   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.714382   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:46.714389   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:46.714449   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:46.752713   60933 cri.go:89] found id: ""
	I1216 21:03:46.752743   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.752754   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:46.752763   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:46.752827   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:46.790787   60933 cri.go:89] found id: ""
	I1216 21:03:46.790821   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.790837   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:46.790845   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:46.790902   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:46.827905   60933 cri.go:89] found id: ""
	I1216 21:03:46.827934   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.827945   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:46.827954   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:46.828023   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:46.863522   60933 cri.go:89] found id: ""
	I1216 21:03:46.863547   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.863563   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:46.863570   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:46.863634   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:46.906005   60933 cri.go:89] found id: ""
	I1216 21:03:46.906035   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.906044   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:46.906049   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:46.906103   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:46.947639   60933 cri.go:89] found id: ""
	I1216 21:03:46.947668   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.947679   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:46.947691   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:46.947706   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:47.001693   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:47.001732   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:47.023122   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:47.023166   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:47.108257   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:47.108291   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:47.108303   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:47.184768   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:47.184807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:44.940074   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.437155   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:45.819943   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.820443   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.820700   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.955794   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.960595   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:52.455630   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.729433   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:49.743836   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:49.743903   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:49.783021   60933 cri.go:89] found id: ""
	I1216 21:03:49.783054   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.783066   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:49.783074   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:49.783144   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:49.820371   60933 cri.go:89] found id: ""
	I1216 21:03:49.820399   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.820409   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:49.820416   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:49.820476   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:49.857918   60933 cri.go:89] found id: ""
	I1216 21:03:49.857948   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.857959   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:49.857967   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:49.858033   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:49.899517   60933 cri.go:89] found id: ""
	I1216 21:03:49.899548   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.899558   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:49.899565   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:49.899632   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:49.938771   60933 cri.go:89] found id: ""
	I1216 21:03:49.938797   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.938805   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:49.938810   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:49.938857   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:49.975748   60933 cri.go:89] found id: ""
	I1216 21:03:49.975781   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.975792   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:49.975800   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:49.975876   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:50.013057   60933 cri.go:89] found id: ""
	I1216 21:03:50.013082   60933 logs.go:282] 0 containers: []
	W1216 21:03:50.013090   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:50.013127   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:50.013178   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:50.049106   60933 cri.go:89] found id: ""
	I1216 21:03:50.049138   60933 logs.go:282] 0 containers: []
	W1216 21:03:50.049150   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:50.049161   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:50.049176   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:50.063815   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:50.063847   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:50.137801   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:50.137826   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:50.137841   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:50.218456   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:50.218495   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:50.263347   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:50.263379   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:52.824077   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:52.838096   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:52.838185   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:52.880550   60933 cri.go:89] found id: ""
	I1216 21:03:52.880582   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.880593   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:52.880600   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:52.880658   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:52.919728   60933 cri.go:89] found id: ""
	I1216 21:03:52.919751   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.919759   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:52.919764   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:52.919819   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:52.957519   60933 cri.go:89] found id: ""
	I1216 21:03:52.957542   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.957549   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:52.957555   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:52.957607   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:52.996631   60933 cri.go:89] found id: ""
	I1216 21:03:52.996663   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.996673   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:52.996681   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:52.996745   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:53.059902   60933 cri.go:89] found id: ""
	I1216 21:03:53.060014   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.060030   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:53.060039   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:53.060105   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:53.099367   60933 cri.go:89] found id: ""
	I1216 21:03:53.099395   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.099406   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:53.099419   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:53.099486   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:53.140668   60933 cri.go:89] found id: ""
	I1216 21:03:53.140696   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.140704   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:53.140709   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:53.140777   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:53.179182   60933 cri.go:89] found id: ""
	I1216 21:03:53.179208   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.179216   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:53.179225   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:53.179236   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:53.233441   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:53.233481   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:53.247526   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:53.247569   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:53.321868   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:53.321895   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:53.321911   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:53.410904   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:53.410959   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:49.936523   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:51.936955   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.441538   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:52.319658   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.319887   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.955490   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:57.456080   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:55.954371   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:55.968506   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:55.968570   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:56.005087   60933 cri.go:89] found id: ""
	I1216 21:03:56.005118   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.005130   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:56.005137   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:56.005205   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:56.039443   60933 cri.go:89] found id: ""
	I1216 21:03:56.039467   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.039475   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:56.039486   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:56.039537   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:56.078181   60933 cri.go:89] found id: ""
	I1216 21:03:56.078213   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.078224   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:56.078231   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:56.078289   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:56.115809   60933 cri.go:89] found id: ""
	I1216 21:03:56.115833   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.115841   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:56.115848   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:56.115901   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:56.154299   60933 cri.go:89] found id: ""
	I1216 21:03:56.154323   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.154330   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:56.154336   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:56.154395   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:56.193069   60933 cri.go:89] found id: ""
	I1216 21:03:56.193098   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.193106   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:56.193112   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:56.193161   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:56.231067   60933 cri.go:89] found id: ""
	I1216 21:03:56.231099   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.231118   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:56.231125   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:56.231191   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:56.270980   60933 cri.go:89] found id: ""
	I1216 21:03:56.271011   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.271022   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:56.271035   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:56.271050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:56.321374   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:56.321405   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:56.336802   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:56.336847   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:56.414052   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:56.414078   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:56.414091   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:56.499118   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:56.499158   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:59.049386   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:59.063191   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:59.063300   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:59.102136   60933 cri.go:89] found id: ""
	I1216 21:03:59.102169   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.102180   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:59.102187   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:59.102255   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:59.138311   60933 cri.go:89] found id: ""
	I1216 21:03:59.138340   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.138357   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:59.138364   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:59.138431   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:59.176131   60933 cri.go:89] found id: ""
	I1216 21:03:59.176159   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.176169   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:59.176177   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:59.176259   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:59.214274   60933 cri.go:89] found id: ""
	I1216 21:03:59.214308   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.214320   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:59.214327   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:59.214397   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:59.254499   60933 cri.go:89] found id: ""
	I1216 21:03:59.254524   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.254531   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:59.254537   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:59.254602   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:59.292715   60933 cri.go:89] found id: ""
	I1216 21:03:59.292755   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.292765   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:59.292772   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:59.292836   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:59.333279   60933 cri.go:89] found id: ""
	I1216 21:03:59.333314   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.333325   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:59.333332   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:59.333404   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:59.372071   60933 cri.go:89] found id: ""
	I1216 21:03:59.372104   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.372116   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:59.372126   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:59.372143   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:59.389021   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:59.389051   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 21:03:56.936508   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:59.438217   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:56.323300   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:58.819599   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:59.456242   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:01.956873   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	W1216 21:03:59.503281   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:59.503304   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:59.503316   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:59.581761   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:59.581797   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:59.627604   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:59.627646   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:02.179425   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:02.195786   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:04:02.195850   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:04:02.239763   60933 cri.go:89] found id: ""
	I1216 21:04:02.239790   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.239801   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:04:02.239809   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:04:02.239873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:04:02.278885   60933 cri.go:89] found id: ""
	I1216 21:04:02.278914   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.278926   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:04:02.278935   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:04:02.279004   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:04:02.320701   60933 cri.go:89] found id: ""
	I1216 21:04:02.320731   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.320742   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:04:02.320749   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:04:02.320811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:04:02.357726   60933 cri.go:89] found id: ""
	I1216 21:04:02.357757   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.357767   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:04:02.357773   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:04:02.357826   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:04:02.399577   60933 cri.go:89] found id: ""
	I1216 21:04:02.399609   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.399618   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:04:02.399624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:04:02.399687   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:04:02.445559   60933 cri.go:89] found id: ""
	I1216 21:04:02.445590   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.445600   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:04:02.445607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:04:02.445670   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:04:02.482983   60933 cri.go:89] found id: ""
	I1216 21:04:02.483015   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.483027   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:04:02.483035   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:04:02.483116   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:04:02.523028   60933 cri.go:89] found id: ""
	I1216 21:04:02.523055   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.523063   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:04:02.523073   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:04:02.523084   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:02.577447   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:04:02.577487   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:04:02.594539   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:04:02.594567   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:04:02.683805   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:04:02.683832   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:04:02.683848   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:04:02.763377   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:04:02.763416   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:04:01.937214   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:04.436771   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:01.319860   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:03.320323   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:04.454654   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:06.456145   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:05.311029   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:05.328358   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:04:05.328438   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:04:05.367378   60933 cri.go:89] found id: ""
	I1216 21:04:05.367402   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.367409   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:04:05.367419   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:04:05.367468   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:04:05.406268   60933 cri.go:89] found id: ""
	I1216 21:04:05.406291   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.406301   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:04:05.406306   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:04:05.406353   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:04:05.444737   60933 cri.go:89] found id: ""
	I1216 21:04:05.444767   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.444778   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:04:05.444787   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:04:05.444836   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:04:05.484044   60933 cri.go:89] found id: ""
	I1216 21:04:05.484132   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.484153   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:04:05.484161   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:04:05.484222   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:04:05.523395   60933 cri.go:89] found id: ""
	I1216 21:04:05.523420   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.523431   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:04:05.523439   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:04:05.523501   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:04:05.566925   60933 cri.go:89] found id: ""
	I1216 21:04:05.566954   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.566967   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:04:05.566974   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:04:05.567036   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:04:05.611275   60933 cri.go:89] found id: ""
	I1216 21:04:05.611303   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.611314   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:04:05.611321   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:04:05.611396   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:04:05.650340   60933 cri.go:89] found id: ""
	I1216 21:04:05.650371   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.650379   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:04:05.650389   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:04:05.650400   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:05.702277   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:04:05.702321   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:04:05.718685   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:04:05.718713   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:04:05.794979   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:04:05.795005   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:04:05.795020   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:04:05.897348   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:04:05.897383   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:04:08.447268   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:08.462553   60933 kubeadm.go:597] duration metric: took 4m2.545161532s to restartPrimaryControlPlane
	W1216 21:04:08.462621   60933 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:08.462650   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:06.437699   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:08.936904   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:05.813413   60829 pod_ready.go:82] duration metric: took 4m0.000648161s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:05.813448   60829 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:05.813472   60829 pod_ready.go:39] duration metric: took 4m14.577422135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:05.813498   60829 kubeadm.go:597] duration metric: took 4m22.010606819s to restartPrimaryControlPlane
	W1216 21:04:05.813559   60829 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:05.813593   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:10.315541   60933 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.85286561s)
	I1216 21:04:10.315622   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:10.330937   60933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:10.343702   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:10.356498   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:10.356526   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:10.356579   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:04:10.367777   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:10.367847   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:10.379109   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:04:10.389258   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:10.389313   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:10.399959   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:04:10.410664   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:10.410734   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:10.423138   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:04:10.433922   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:10.433976   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
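	(Annotation, not part of the captured log.) The four grep-then-rm pairs above are minikube's stale kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and deleted when the check fails — here every file is simply missing after the earlier kubeadm reset, so each grep exits with status 2. A rough shell condensation of that loop, using only the endpoint and file names that appear in this log (the default-k8s-diff-port profile later in the log uses port 8444 instead of 8443):

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # drop the kubeconfig if it does not mention the expected endpoint (or does not exist);
	      # the kubeadm init that follows regenerates all four files
	      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done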
	I1216 21:04:10.445297   60933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:10.524236   60933 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 21:04:10.524344   60933 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:10.680331   60933 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:10.680489   60933 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:10.680641   60933 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 21:04:10.877305   60933 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:10.879375   60933 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:10.879496   60933 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:10.879567   60933 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:10.879647   60933 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:10.879748   60933 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:10.879865   60933 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:10.880127   60933 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:10.881047   60933 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:10.881874   60933 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:10.882778   60933 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:10.883678   60933 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:10.884029   60933 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:10.884130   60933 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:11.034011   60933 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:11.273509   60933 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:11.477553   60933 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:11.542158   60933 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:11.565791   60933 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:11.567317   60933 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:11.567409   60933 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:11.763223   60933 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:08.955135   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:10.957061   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:11.766107   60933 out.go:235]   - Booting up control plane ...
	I1216 21:04:11.766257   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:11.766367   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:11.768484   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:11.773601   60933 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:11.780554   60933 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 21:04:11.436931   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:13.437532   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:13.455175   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:15.455370   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.456801   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:15.936107   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.937233   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.949449   60421 pod_ready.go:82] duration metric: took 4m0.000885381s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:17.949484   60421 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:17.949501   60421 pod_ready.go:39] duration metric: took 4m10.554596731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:17.949525   60421 kubeadm.go:597] duration metric: took 4m42.414672113s to restartPrimaryControlPlane
	W1216 21:04:17.949588   60421 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:17.949619   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:19.938104   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:22.436710   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:24.936550   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:26.936809   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:29.437478   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:33.833179   60829 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (28.019561403s)
	I1216 21:04:33.833265   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:33.850170   60829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:33.862112   60829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:33.873752   60829 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:33.873777   60829 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:33.873832   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1216 21:04:33.885038   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:33.885115   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:33.897352   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1216 21:04:33.911055   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:33.911115   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:33.925077   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1216 21:04:33.938925   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:33.938997   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:33.952022   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1216 21:04:33.963099   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:33.963176   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:04:33.974080   60829 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:34.031525   60829 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:04:34.031643   60829 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:34.153173   60829 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:34.153340   60829 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:34.153453   60829 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:04:34.166258   60829 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:31.936620   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:33.938157   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:34.168275   60829 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:34.168388   60829 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:34.168463   60829 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:34.168545   60829 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:34.168633   60829 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:34.168740   60829 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:34.168837   60829 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:34.168934   60829 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:34.169020   60829 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:34.169119   60829 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:34.169222   60829 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:34.169278   60829 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:34.169365   60829 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:34.277660   60829 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:34.526364   60829 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:04:34.629728   60829 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:34.757824   60829 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:34.838922   60829 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:34.839431   60829 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:34.841925   60829 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:34.843761   60829 out.go:235]   - Booting up control plane ...
	I1216 21:04:34.843874   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:34.843945   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:34.846919   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:34.866038   60829 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:34.875031   60829 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:34.875112   60829 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:35.016713   60829 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:04:35.016879   60829 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:04:36.437043   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:38.437584   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:36.017947   60829 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001159452s
	I1216 21:04:36.018086   60829 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:04:40.519460   60829 kubeadm.go:310] [api-check] The API server is healthy after 4.501460025s
	I1216 21:04:40.533680   60829 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:04:40.552611   60829 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:04:40.585691   60829 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:04:40.585905   60829 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-327790 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:04:40.613752   60829 kubeadm.go:310] [bootstrap-token] Using token: w829op.p4bszg1q76emsxit
	I1216 21:04:40.615428   60829 out.go:235]   - Configuring RBAC rules ...
	I1216 21:04:40.615556   60829 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:04:40.629296   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:04:40.638449   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:04:40.644143   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:04:40.648665   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:04:40.653151   60829 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:04:40.926399   60829 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:04:41.370569   60829 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:04:41.927555   60829 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:04:41.928692   60829 kubeadm.go:310] 
	I1216 21:04:41.928769   60829 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:04:41.928779   60829 kubeadm.go:310] 
	I1216 21:04:41.928851   60829 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:04:41.928878   60829 kubeadm.go:310] 
	I1216 21:04:41.928928   60829 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:04:41.929005   60829 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:04:41.929053   60829 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:04:41.929060   60829 kubeadm.go:310] 
	I1216 21:04:41.929107   60829 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:04:41.929114   60829 kubeadm.go:310] 
	I1216 21:04:41.929151   60829 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:04:41.929157   60829 kubeadm.go:310] 
	I1216 21:04:41.929205   60829 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:04:41.929264   60829 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:04:41.929325   60829 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:04:41.929354   60829 kubeadm.go:310] 
	I1216 21:04:41.929527   60829 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:04:41.929657   60829 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:04:41.929674   60829 kubeadm.go:310] 
	I1216 21:04:41.929787   60829 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token w829op.p4bszg1q76emsxit \
	I1216 21:04:41.929941   60829 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:04:41.929975   60829 kubeadm.go:310] 	--control-plane 
	I1216 21:04:41.929984   60829 kubeadm.go:310] 
	I1216 21:04:41.930103   60829 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:04:41.930124   60829 kubeadm.go:310] 
	I1216 21:04:41.930245   60829 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token w829op.p4bszg1q76emsxit \
	I1216 21:04:41.930378   60829 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:04:41.931554   60829 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:04:41.931685   60829 cni.go:84] Creating CNI manager for ""
	I1216 21:04:41.931699   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:04:41.933748   60829 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:04:40.937882   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:43.436864   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:41.935317   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:04:41.947502   60829 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:04:41.976180   60829 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:04:41.976288   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:41.976323   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-327790 minikube.k8s.io/updated_at=2024_12_16T21_04_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=default-k8s-diff-port-327790 minikube.k8s.io/primary=true
	I1216 21:04:42.010154   60829 ops.go:34] apiserver oom_adj: -16
	I1216 21:04:42.181919   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:42.682201   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:43.182557   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:43.682418   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:44.182318   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:44.682793   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.182342   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.682678   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.777484   60829 kubeadm.go:1113] duration metric: took 3.801254961s to wait for elevateKubeSystemPrivileges
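	(Annotation, not part of the captured log.) The elevateKubeSystemPrivileges step that just finished boils down to two commands already visible above: a one-time cluster-admin binding for the kube-system default service account, followed by polling "get sa default" until the service account controller has created it. A hedged shell sketch of that sequence, reusing the binary and kubeconfig paths shown in the log:

	    KUBECTL=/var/lib/minikube/binaries/v1.32.0/kubectl
	    KCFG=/var/lib/minikube/kubeconfig
	    sudo "$KUBECTL" create clusterrolebinding minikube-rbac \
	      --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig="$KCFG"
	    # retry until the "default" service account exists, as the repeated "get sa default" calls above do
	    until sudo "$KUBECTL" get sa default --kubeconfig="$KCFG" >/dev/null 2>&1; do sleep 0.5; done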
	I1216 21:04:45.777522   60829 kubeadm.go:394] duration metric: took 5m2.030533321s to StartCluster
	I1216 21:04:45.777543   60829 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:45.777644   60829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:04:45.780034   60829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:45.780369   60829 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:04:45.780450   60829 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:04:45.780566   60829 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780579   60829 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780595   60829 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-327790"
	W1216 21:04:45.780606   60829 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:04:45.780599   60829 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780609   60829 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-327790"
	I1216 21:04:45.780627   60829 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-327790"
	I1216 21:04:45.780627   60829 config.go:182] Loaded profile config "default-k8s-diff-port-327790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	W1216 21:04:45.780638   60829 addons.go:243] addon metrics-server should already be in state true
	I1216 21:04:45.780648   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.780675   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.781091   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781091   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781132   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781136   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.781174   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.781137   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.782022   60829 out.go:177] * Verifying Kubernetes components...
	I1216 21:04:45.783549   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:04:45.799326   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42295
	I1216 21:04:45.799443   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
	I1216 21:04:45.799865   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.800491   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.800510   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.800588   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.801082   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.801102   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.801178   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I1216 21:04:45.801202   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.801517   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.801539   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.801707   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.801925   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.801959   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.801974   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.801992   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.802319   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.802817   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.802857   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.805750   60829 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-327790"
	W1216 21:04:45.805775   60829 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:04:45.805806   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.806153   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.806193   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.820545   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46261
	I1216 21:04:45.821062   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.821598   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.821625   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.822086   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.822294   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.823995   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.824775   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I1216 21:04:45.825269   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.825754   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.825778   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.826117   60829 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:04:45.826158   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.826843   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.826892   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.827527   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:04:45.827557   60829 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:04:45.827577   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.829352   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I1216 21:04:45.829769   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.830197   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.830217   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.830543   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.830767   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.831413   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.832010   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.832030   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.832202   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.832424   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.832496   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.832847   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.833056   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.834475   60829 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:04:45.835944   60829 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:45.835965   60829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:04:45.835983   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.839118   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.839533   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.839560   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.839744   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.839947   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.840087   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.840218   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.845365   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
	I1216 21:04:45.845925   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.847042   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.847060   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.847450   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.847669   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.849934   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.850165   60829 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:45.850182   60829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:04:45.850199   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.853083   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.853493   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.853518   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.853679   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.853848   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.854024   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.854177   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.978935   60829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:04:46.010601   60829 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-327790" to be "Ready" ...
	I1216 21:04:46.019674   60829 node_ready.go:49] node "default-k8s-diff-port-327790" has status "Ready":"True"
	I1216 21:04:46.019704   60829 node_ready.go:38] duration metric: took 9.066576ms for node "default-k8s-diff-port-327790" to be "Ready" ...
	I1216 21:04:46.019715   60829 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:46.033957   60829 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:46.103779   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:04:46.103812   60829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:04:46.120299   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:46.171131   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:04:46.171171   60829 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:04:46.171280   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:46.244556   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:46.244587   60829 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:04:46.332646   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:47.461793   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.34145582s)
	I1216 21:04:47.461871   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.129193295s)
	I1216 21:04:47.461793   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.290486436s)
	I1216 21:04:47.461899   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461913   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461918   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.461875   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461982   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.461927   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462463   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462469   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462480   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462488   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462494   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462504   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462506   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462511   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462516   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462521   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462529   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462556   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462573   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462581   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462588   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462805   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462816   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462816   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462827   60829 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-327790"
	I1216 21:04:47.462841   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462848   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.463049   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.463067   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.524466   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.524497   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.524822   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.524843   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.524869   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.526679   60829 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1216 21:04:45.861404   60421 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.911759863s)
	I1216 21:04:45.861483   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:45.889560   60421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:45.922090   60421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:45.945227   60421 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:45.945261   60421 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:45.945306   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:04:45.960594   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:45.960666   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:45.980613   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:04:46.005349   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:46.005431   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:46.021683   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:04:46.032967   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:46.033042   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:46.064718   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:04:46.078736   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:46.078805   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:04:46.092798   60421 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:46.293434   60421 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:04:45.430910   60215 pod_ready.go:82] duration metric: took 4m0.000948437s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:45.430950   60215 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:45.430970   60215 pod_ready.go:39] duration metric: took 4m12.926677248s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:45.431002   60215 kubeadm.go:597] duration metric: took 4m20.847109652s to restartPrimaryControlPlane
	W1216 21:04:45.431059   60215 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:45.431092   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:47.527909   60829 addons.go:510] duration metric: took 1.747463467s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1216 21:04:48.047956   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:51.781856   60933 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 21:04:51.782285   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:04:51.782543   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:04:54.704462   60421 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:04:54.704514   60421 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:54.704600   60421 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:54.704736   60421 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:54.704839   60421 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:04:54.704894   60421 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:54.706650   60421 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:54.706771   60421 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:54.706865   60421 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:54.706999   60421 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:54.707113   60421 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:54.707256   60421 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:54.707344   60421 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:54.707478   60421 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:54.707573   60421 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:54.707683   60421 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:54.707806   60421 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:54.707851   60421 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:54.707902   60421 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:54.707968   60421 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:54.708056   60421 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:04:54.708127   60421 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:54.708225   60421 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:54.708305   60421 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:54.708427   60421 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:54.708526   60421 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:54.710014   60421 out.go:235]   - Booting up control plane ...
	I1216 21:04:54.710113   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:54.710197   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:54.710254   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:54.710361   60421 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:54.710457   60421 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:54.710511   60421 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:54.710670   60421 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:04:54.710792   60421 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:04:54.710852   60421 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.532878ms
	I1216 21:04:54.710912   60421 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:04:54.710982   60421 kubeadm.go:310] [api-check] The API server is healthy after 5.50189872s
	I1216 21:04:54.711125   60421 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:04:54.711281   60421 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:04:54.711362   60421 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:04:54.711618   60421 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-232338 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:04:54.711712   60421 kubeadm.go:310] [bootstrap-token] Using token: knn1cl.i9horbjuutctjfyf
	I1216 21:04:54.714363   60421 out.go:235]   - Configuring RBAC rules ...
	I1216 21:04:54.714488   60421 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:04:54.714560   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:04:54.714674   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:04:54.714820   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:04:54.714914   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:04:54.714981   60421 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:04:54.715083   60421 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:04:54.715159   60421 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:04:54.715228   60421 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:04:54.715237   60421 kubeadm.go:310] 
	I1216 21:04:54.715345   60421 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:04:54.715359   60421 kubeadm.go:310] 
	I1216 21:04:54.715455   60421 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:04:54.715463   60421 kubeadm.go:310] 
	I1216 21:04:54.715510   60421 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:04:54.715596   60421 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:04:54.715669   60421 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:04:54.715679   60421 kubeadm.go:310] 
	I1216 21:04:54.715767   60421 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:04:54.715775   60421 kubeadm.go:310] 
	I1216 21:04:54.715842   60421 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:04:54.715851   60421 kubeadm.go:310] 
	I1216 21:04:54.715908   60421 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:04:54.715969   60421 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:04:54.716026   60421 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:04:54.716032   60421 kubeadm.go:310] 
	I1216 21:04:54.716106   60421 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:04:54.716171   60421 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:04:54.716177   60421 kubeadm.go:310] 
	I1216 21:04:54.716240   60421 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token knn1cl.i9horbjuutctjfyf \
	I1216 21:04:54.716340   60421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:04:54.716374   60421 kubeadm.go:310] 	--control-plane 
	I1216 21:04:54.716384   60421 kubeadm.go:310] 
	I1216 21:04:54.716457   60421 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:04:54.716467   60421 kubeadm.go:310] 
	I1216 21:04:54.716534   60421 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token knn1cl.i9horbjuutctjfyf \
	I1216 21:04:54.716634   60421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:04:54.716644   60421 cni.go:84] Creating CNI manager for ""
	I1216 21:04:54.716651   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:04:54.718260   60421 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:04:50.542207   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:52.542453   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:55.040960   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:56.042145   60829 pod_ready.go:93] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.042175   60829 pod_ready.go:82] duration metric: took 10.008191514s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.042192   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.047996   60829 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.048022   60829 pod_ready.go:82] duration metric: took 5.821217ms for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.048031   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.052582   60829 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.052608   60829 pod_ready.go:82] duration metric: took 4.569092ms for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.052619   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.056805   60829 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.056834   60829 pod_ready.go:82] duration metric: took 4.206726ms for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.056841   60829 pod_ready.go:39] duration metric: took 10.037112061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:56.056855   60829 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:04:56.056904   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:56.076993   60829 api_server.go:72] duration metric: took 10.296578804s to wait for apiserver process to appear ...
	I1216 21:04:56.077023   60829 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:04:56.077045   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 21:04:56.082250   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1216 21:04:56.083348   60829 api_server.go:141] control plane version: v1.32.0
	I1216 21:04:56.083369   60829 api_server.go:131] duration metric: took 6.339438ms to wait for apiserver health ...
	I1216 21:04:56.083377   60829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:04:56.090255   60829 system_pods.go:59] 9 kube-system pods found
	I1216 21:04:56.090290   60829 system_pods.go:61] "coredns-668d6bf9bc-2qcfx" [4ac98efa-96ff-4564-93de-4a61de7a6507] Running
	I1216 21:04:56.090302   60829 system_pods.go:61] "coredns-668d6bf9bc-fb7wx" [f2f2c0e7-893f-45ba-8da9-3b03f5560d89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:04:56.090310   60829 system_pods.go:61] "etcd-default-k8s-diff-port-327790" [5363e160-ef78-4737-89f9-5f4d0f0eab95] Running
	I1216 21:04:56.090318   60829 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-327790" [b53c6be6-e476-4a5a-80c2-96e701736820] Running
	I1216 21:04:56.090324   60829 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-327790" [57d8747a-7258-48c3-9bcd-6fedaa8b7431] Running
	I1216 21:04:56.090329   60829 system_pods.go:61] "kube-proxy-njqp8" [e5f1789d-b343-4c2e-b078-4a15f4b18569] Running
	I1216 21:04:56.090334   60829 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-327790" [e2303bbd-b9d9-4392-867f-6f5f43f74826] Running
	I1216 21:04:56.090342   60829 system_pods.go:61] "metrics-server-f79f97bbb-84xtf" [569c6717-dc12-474f-8156-d2dd9e410a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:04:56.090349   60829 system_pods.go:61] "storage-provisioner" [4e5b12f0-3d96-4dd0-81e7-300b82058d47] Running
	I1216 21:04:56.090360   60829 system_pods.go:74] duration metric: took 6.975795ms to wait for pod list to return data ...
	I1216 21:04:56.090373   60829 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:04:56.093967   60829 default_sa.go:45] found service account: "default"
	I1216 21:04:56.093998   60829 default_sa.go:55] duration metric: took 3.616693ms for default service account to be created ...
	I1216 21:04:56.094010   60829 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:04:56.241532   60829 system_pods.go:86] 9 kube-system pods found
	I1216 21:04:56.241568   60829 system_pods.go:89] "coredns-668d6bf9bc-2qcfx" [4ac98efa-96ff-4564-93de-4a61de7a6507] Running
	I1216 21:04:56.241582   60829 system_pods.go:89] "coredns-668d6bf9bc-fb7wx" [f2f2c0e7-893f-45ba-8da9-3b03f5560d89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:04:56.241589   60829 system_pods.go:89] "etcd-default-k8s-diff-port-327790" [5363e160-ef78-4737-89f9-5f4d0f0eab95] Running
	I1216 21:04:56.241597   60829 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-327790" [b53c6be6-e476-4a5a-80c2-96e701736820] Running
	I1216 21:04:56.241605   60829 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-327790" [57d8747a-7258-48c3-9bcd-6fedaa8b7431] Running
	I1216 21:04:56.241611   60829 system_pods.go:89] "kube-proxy-njqp8" [e5f1789d-b343-4c2e-b078-4a15f4b18569] Running
	I1216 21:04:56.241617   60829 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-327790" [e2303bbd-b9d9-4392-867f-6f5f43f74826] Running
	I1216 21:04:56.241624   60829 system_pods.go:89] "metrics-server-f79f97bbb-84xtf" [569c6717-dc12-474f-8156-d2dd9e410a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:04:56.241629   60829 system_pods.go:89] "storage-provisioner" [4e5b12f0-3d96-4dd0-81e7-300b82058d47] Running
	I1216 21:04:56.241639   60829 system_pods.go:126] duration metric: took 147.621114ms to wait for k8s-apps to be running ...
	I1216 21:04:56.241656   60829 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:04:56.241730   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:56.258891   60829 system_svc.go:56] duration metric: took 17.227056ms WaitForService to wait for kubelet
	I1216 21:04:56.258935   60829 kubeadm.go:582] duration metric: took 10.478521255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:04:56.258962   60829 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:04:56.438641   60829 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:04:56.438667   60829 node_conditions.go:123] node cpu capacity is 2
	I1216 21:04:56.438679   60829 node_conditions.go:105] duration metric: took 179.711624ms to run NodePressure ...
	I1216 21:04:56.438692   60829 start.go:241] waiting for startup goroutines ...
	I1216 21:04:56.438700   60829 start.go:246] waiting for cluster config update ...
	I1216 21:04:56.438714   60829 start.go:255] writing updated cluster config ...
	I1216 21:04:56.438975   60829 ssh_runner.go:195] Run: rm -f paused
	I1216 21:04:56.490195   60829 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:04:56.492395   60829 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-327790" cluster and "default" namespace by default
	I1216 21:04:54.719483   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:04:54.732035   60421 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:04:54.754010   60421 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:04:54.754122   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:54.754177   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-232338 minikube.k8s.io/updated_at=2024_12_16T21_04_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=no-preload-232338 minikube.k8s.io/primary=true
	I1216 21:04:54.773008   60421 ops.go:34] apiserver oom_adj: -16
	I1216 21:04:55.009573   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:55.510039   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:56.009645   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:56.509608   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:57.009714   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:57.509902   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.009901   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.509631   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.632896   60421 kubeadm.go:1113] duration metric: took 3.878846316s to wait for elevateKubeSystemPrivileges
	I1216 21:04:58.632933   60421 kubeadm.go:394] duration metric: took 5m23.15322559s to StartCluster
	I1216 21:04:58.632951   60421 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:58.633031   60421 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:04:58.635409   60421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:58.635720   60421 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:04:58.635835   60421 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:04:58.635944   60421 addons.go:69] Setting storage-provisioner=true in profile "no-preload-232338"
	I1216 21:04:58.635958   60421 config.go:182] Loaded profile config "no-preload-232338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:04:58.635966   60421 addons.go:234] Setting addon storage-provisioner=true in "no-preload-232338"
	I1216 21:04:58.635969   60421 addons.go:69] Setting default-storageclass=true in profile "no-preload-232338"
	W1216 21:04:58.635975   60421 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:04:58.635986   60421 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-232338"
	I1216 21:04:58.636005   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.635997   60421 addons.go:69] Setting metrics-server=true in profile "no-preload-232338"
	I1216 21:04:58.636029   60421 addons.go:234] Setting addon metrics-server=true in "no-preload-232338"
	W1216 21:04:58.636038   60421 addons.go:243] addon metrics-server should already be in state true
	I1216 21:04:58.636069   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.636428   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636460   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.636428   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636513   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636532   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.636549   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.637558   60421 out.go:177] * Verifying Kubernetes components...
	I1216 21:04:58.639254   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:04:58.652770   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
	I1216 21:04:58.652789   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35093
	I1216 21:04:58.653247   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.653368   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.653818   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.653836   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.653944   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.653963   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.654562   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.654565   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.654775   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.655078   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.655117   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.656383   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I1216 21:04:58.656987   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.657520   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.657553   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.657933   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.658517   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.658566   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.658692   60421 addons.go:234] Setting addon default-storageclass=true in "no-preload-232338"
	W1216 21:04:58.658708   60421 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:04:58.658737   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.659001   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.659043   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.672942   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34153
	I1216 21:04:58.673478   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.674034   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.674063   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.674421   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.674594   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I1216 21:04:58.674614   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.674994   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.675686   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.675699   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.676334   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.676480   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.676898   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.676931   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.679230   60421 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:04:58.680032   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33309
	I1216 21:04:58.680609   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.680754   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:04:58.680772   60421 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:04:58.680794   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.681202   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.681221   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.681610   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.681815   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.683608   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.684266   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.684765   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.684793   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.684925   60421 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:04:56.783069   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:04:56.783323   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:04:58.684932   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.685156   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.685321   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.685515   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.686360   60421 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:58.686379   60421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:04:58.686396   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.689909   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.690365   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.690392   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.690698   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.690927   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.691095   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.691305   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.695899   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36017
	I1216 21:04:58.696274   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.696758   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.696777   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.697064   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.697225   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.698530   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.698751   60421 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:58.698766   60421 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:04:58.698784   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.701986   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.702420   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.702473   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.702655   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.702839   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.702979   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.703197   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.866115   60421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:04:58.892287   60421 node_ready.go:35] waiting up to 6m0s for node "no-preload-232338" to be "Ready" ...
	I1216 21:04:58.949580   60421 node_ready.go:49] node "no-preload-232338" has status "Ready":"True"
	I1216 21:04:58.949610   60421 node_ready.go:38] duration metric: took 57.274849ms for node "no-preload-232338" to be "Ready" ...
	I1216 21:04:58.949622   60421 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:58.983955   60421 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:59.036124   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:59.039113   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:04:59.039139   60421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:04:59.087493   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:04:59.087531   60421 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:04:59.094976   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:59.129816   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:59.129840   60421 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:04:59.236390   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:00.157688   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.121522553s)
	I1216 21:05:00.157736   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.157751   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.157764   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.06274536s)
	I1216 21:05:00.157830   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.157851   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158259   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158270   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158282   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158288   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158297   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158314   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.158327   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158319   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158344   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.158352   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158589   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158604   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158624   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158589   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158655   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.182819   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.182844   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.183229   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.183271   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.679810   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.44337328s)
	I1216 21:05:00.679867   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.679880   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.680233   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.680254   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.680266   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.680274   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.680612   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.680632   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.680643   60421 addons.go:475] Verifying addon metrics-server=true in "no-preload-232338"
	I1216 21:05:00.680659   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.682400   60421 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1216 21:05:00.684226   60421 addons.go:510] duration metric: took 2.048395371s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1216 21:05:00.997599   60421 pod_ready.go:103] pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:01.990706   60421 pod_ready.go:93] pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:01.990733   60421 pod_ready.go:82] duration metric: took 3.006750411s for pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:01.990742   60421 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:03.998055   60421 pod_ready.go:103] pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:05.997310   60421 pod_ready.go:93] pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:05.997334   60421 pod_ready.go:82] duration metric: took 4.006586503s for pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:05.997346   60421 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.002576   60421 pod_ready.go:93] pod "etcd-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.002597   60421 pod_ready.go:82] duration metric: took 5.244238ms for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.002607   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.007407   60421 pod_ready.go:93] pod "kube-apiserver-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.007435   60421 pod_ready.go:82] duration metric: took 4.820838ms for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.007449   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.012239   60421 pod_ready.go:93] pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.012263   60421 pod_ready.go:82] duration metric: took 4.806874ms for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.012273   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m5hq8" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.017087   60421 pod_ready.go:93] pod "kube-proxy-m5hq8" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.017111   60421 pod_ready.go:82] duration metric: took 4.830348ms for pod "kube-proxy-m5hq8" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.017124   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.393947   60421 pod_ready.go:93] pod "kube-scheduler-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.393978   60421 pod_ready.go:82] duration metric: took 376.845934ms for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.393989   60421 pod_ready.go:39] duration metric: took 7.444356073s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:05:06.394008   60421 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:05:06.394074   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:05:06.410287   60421 api_server.go:72] duration metric: took 7.774519412s to wait for apiserver process to appear ...
	I1216 21:05:06.410327   60421 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:05:06.410363   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:05:06.415344   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 200:
	ok
	I1216 21:05:06.416302   60421 api_server.go:141] control plane version: v1.32.0
	I1216 21:05:06.416324   60421 api_server.go:131] duration metric: took 5.989768ms to wait for apiserver health ...
	I1216 21:05:06.416333   60421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:05:06.598174   60421 system_pods.go:59] 9 kube-system pods found
	I1216 21:05:06.598205   60421 system_pods.go:61] "coredns-668d6bf9bc-4wwvd" [1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e] Running
	I1216 21:05:06.598210   60421 system_pods.go:61] "coredns-668d6bf9bc-c4qfj" [b9bf3125-1e6d-4794-a2e6-2ff7ed5132b1] Running
	I1216 21:05:06.598214   60421 system_pods.go:61] "etcd-no-preload-232338" [5318f756-4c64-46be-b71b-94d53f48f0e9] Running
	I1216 21:05:06.598218   60421 system_pods.go:61] "kube-apiserver-no-preload-232338" [8d8fa68c-80ab-4747-a2ce-eeaff8847c29] Running
	I1216 21:05:06.598222   60421 system_pods.go:61] "kube-controller-manager-no-preload-232338" [8626806c-cd3f-488c-95c3-4b909878c1e4] Running
	I1216 21:05:06.598224   60421 system_pods.go:61] "kube-proxy-m5hq8" [ca0d357a-dda2-4508-a954-5c67eaf5b8ac] Running
	I1216 21:05:06.598229   60421 system_pods.go:61] "kube-scheduler-no-preload-232338" [8944107e-9e5c-474b-a0c1-9461e797a131] Running
	I1216 21:05:06.598236   60421 system_pods.go:61] "metrics-server-f79f97bbb-l7dcr" [fabafb40-1cb8-427b-88a6-37eeb6fd5b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:06.598240   60421 system_pods.go:61] "storage-provisioner" [3b742666-dfd4-4c9b-95a9-25367ec2a718] Running
	I1216 21:05:06.598248   60421 system_pods.go:74] duration metric: took 181.908567ms to wait for pod list to return data ...
	I1216 21:05:06.598255   60421 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:05:06.794774   60421 default_sa.go:45] found service account: "default"
	I1216 21:05:06.794805   60421 default_sa.go:55] duration metric: took 196.542698ms for default service account to be created ...
	I1216 21:05:06.794823   60421 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:05:06.998297   60421 system_pods.go:86] 9 kube-system pods found
	I1216 21:05:06.998332   60421 system_pods.go:89] "coredns-668d6bf9bc-4wwvd" [1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e] Running
	I1216 21:05:06.998341   60421 system_pods.go:89] "coredns-668d6bf9bc-c4qfj" [b9bf3125-1e6d-4794-a2e6-2ff7ed5132b1] Running
	I1216 21:05:06.998348   60421 system_pods.go:89] "etcd-no-preload-232338" [5318f756-4c64-46be-b71b-94d53f48f0e9] Running
	I1216 21:05:06.998354   60421 system_pods.go:89] "kube-apiserver-no-preload-232338" [8d8fa68c-80ab-4747-a2ce-eeaff8847c29] Running
	I1216 21:05:06.998359   60421 system_pods.go:89] "kube-controller-manager-no-preload-232338" [8626806c-cd3f-488c-95c3-4b909878c1e4] Running
	I1216 21:05:06.998364   60421 system_pods.go:89] "kube-proxy-m5hq8" [ca0d357a-dda2-4508-a954-5c67eaf5b8ac] Running
	I1216 21:05:06.998369   60421 system_pods.go:89] "kube-scheduler-no-preload-232338" [8944107e-9e5c-474b-a0c1-9461e797a131] Running
	I1216 21:05:06.998378   60421 system_pods.go:89] "metrics-server-f79f97bbb-l7dcr" [fabafb40-1cb8-427b-88a6-37eeb6fd5b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:06.998385   60421 system_pods.go:89] "storage-provisioner" [3b742666-dfd4-4c9b-95a9-25367ec2a718] Running
	I1216 21:05:06.998397   60421 system_pods.go:126] duration metric: took 203.564807ms to wait for k8s-apps to be running ...
	I1216 21:05:06.998407   60421 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:05:06.998457   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:07.014979   60421 system_svc.go:56] duration metric: took 16.561363ms WaitForService to wait for kubelet
	I1216 21:05:07.015013   60421 kubeadm.go:582] duration metric: took 8.379260538s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:05:07.015029   60421 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:05:07.195470   60421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:05:07.195504   60421 node_conditions.go:123] node cpu capacity is 2
	I1216 21:05:07.195516   60421 node_conditions.go:105] duration metric: took 180.480949ms to run NodePressure ...
	I1216 21:05:07.195530   60421 start.go:241] waiting for startup goroutines ...
	I1216 21:05:07.195541   60421 start.go:246] waiting for cluster config update ...
	I1216 21:05:07.195554   60421 start.go:255] writing updated cluster config ...
	I1216 21:05:07.195857   60421 ssh_runner.go:195] Run: rm -f paused
	I1216 21:05:07.244442   60421 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:05:07.246788   60421 out.go:177] * Done! kubectl is now configured to use "no-preload-232338" cluster and "default" namespace by default
	I1216 21:05:06.784032   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:05:06.784224   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:05:13.066274   60215 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.635155592s)
	I1216 21:05:13.066379   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:13.096145   60215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:05:13.109211   60215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:05:13.125828   60215 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:05:13.125859   60215 kubeadm.go:157] found existing configuration files:
	
	I1216 21:05:13.125914   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:05:13.146982   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:05:13.147053   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:05:13.159382   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:05:13.176492   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:05:13.176572   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:05:13.190933   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:05:13.213230   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:05:13.213301   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:05:13.224631   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:05:13.234914   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:05:13.234975   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
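The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails (here the files simply do not exist yet). A condensed shell sketch of the same pattern, illustrative only and not minikube's actual implementation:

	# Illustrative mirror of the cleanup pattern logged above.
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"   # drop configs that do not point at the expected endpoint
	  fi
	done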
	I1216 21:05:13.245513   60215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:05:13.300399   60215 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:05:13.300491   60215 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:05:13.424114   60215 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:05:13.424252   60215 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:05:13.424372   60215 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:05:13.434507   60215 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:05:13.436710   60215 out.go:235]   - Generating certificates and keys ...
	I1216 21:05:13.436825   60215 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:05:13.436985   60215 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:05:13.437127   60215 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:05:13.437215   60215 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:05:13.437317   60215 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:05:13.437404   60215 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:05:13.437822   60215 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:05:13.438183   60215 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:05:13.438724   60215 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:05:13.439096   60215 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:05:13.439334   60215 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:05:13.439399   60215 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:05:13.528853   60215 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:05:13.700795   60215 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:05:13.890142   60215 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:05:14.166151   60215 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:05:14.310513   60215 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:05:14.311121   60215 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:05:14.317114   60215 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:05:14.319080   60215 out.go:235]   - Booting up control plane ...
	I1216 21:05:14.319218   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:05:14.319332   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:05:14.319518   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:05:14.340394   60215 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:05:14.348443   60215 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:05:14.348533   60215 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:05:14.493244   60215 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:05:14.493456   60215 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:05:14.995210   60215 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.042805ms
	I1216 21:05:14.995325   60215 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:05:20.000911   60215 kubeadm.go:310] [api-check] The API server is healthy after 5.002773967s
	I1216 21:05:20.019851   60215 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:05:20.037375   60215 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:05:20.074003   60215 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:05:20.074237   60215 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-606219 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:05:20.087136   60215 kubeadm.go:310] [bootstrap-token] Using token: wev02f.lvhctqt9pq1agi1c
	I1216 21:05:20.088742   60215 out.go:235]   - Configuring RBAC rules ...
	I1216 21:05:20.088893   60215 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:05:20.094114   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:05:20.101979   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:05:20.105419   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:05:20.112443   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:05:20.116045   60215 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:05:20.406790   60215 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:05:20.844101   60215 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:05:21.414298   60215 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:05:21.414397   60215 kubeadm.go:310] 
	I1216 21:05:21.414488   60215 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:05:21.414504   60215 kubeadm.go:310] 
	I1216 21:05:21.414636   60215 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:05:21.414655   60215 kubeadm.go:310] 
	I1216 21:05:21.414694   60215 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:05:21.414796   60215 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:05:21.414866   60215 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:05:21.414877   60215 kubeadm.go:310] 
	I1216 21:05:21.414978   60215 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:05:21.415004   60215 kubeadm.go:310] 
	I1216 21:05:21.415071   60215 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:05:21.415080   60215 kubeadm.go:310] 
	I1216 21:05:21.415147   60215 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:05:21.415314   60215 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:05:21.415424   60215 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:05:21.415444   60215 kubeadm.go:310] 
	I1216 21:05:21.415568   60215 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:05:21.415674   60215 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:05:21.415690   60215 kubeadm.go:310] 
	I1216 21:05:21.415837   60215 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wev02f.lvhctqt9pq1agi1c \
	I1216 21:05:21.415982   60215 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:05:21.416023   60215 kubeadm.go:310] 	--control-plane 
	I1216 21:05:21.416033   60215 kubeadm.go:310] 
	I1216 21:05:21.416152   60215 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:05:21.416165   60215 kubeadm.go:310] 
	I1216 21:05:21.416295   60215 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wev02f.lvhctqt9pq1agi1c \
	I1216 21:05:21.416452   60215 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:05:21.417157   60215 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
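The join commands printed above embed a --discovery-token-ca-cert-hash. If one wanted to re-derive that hash from the cluster CA (assuming it lives under the certificateDir /var/lib/minikube/certs logged earlier), the standard kubeadm recipe is:

	# Illustrative: recompute the discovery-token-ca-cert-hash from the CA public key.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'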
	I1216 21:05:21.417251   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:05:21.417265   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:05:21.418899   60215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:05:21.420240   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:05:21.438639   60215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
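The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration being set up here; its exact contents are not shown in the log. A bridge conflist of roughly this shape (illustrative values only, not the file minikube actually wrote) could be created like so:

	# Illustrative bridge CNI conflist; field values are examples, not minikube's.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge",
	      "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF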
	I1216 21:05:21.470443   60215 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:05:21.470525   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:21.470552   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-606219 minikube.k8s.io/updated_at=2024_12_16T21_05_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=embed-certs-606219 minikube.k8s.io/primary=true
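The kubectl label invocation above stamps the node with minikube's bookkeeping labels (update timestamp, minikube version, commit, profile name, primary flag). Once the cluster is reachable they can be inspected with, for example:

	kubectl --context embed-certs-606219 get node embed-certs-606219 --show-labels
	kubectl --context embed-certs-606219 get nodes -l minikube.k8s.io/primary=true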
	I1216 21:05:21.721162   60215 ops.go:34] apiserver oom_adj: -16
	I1216 21:05:21.721292   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:22.221634   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:22.722431   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:23.221436   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:23.721948   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.222009   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.722203   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.835684   60215 kubeadm.go:1113] duration metric: took 3.36522517s to wait for elevateKubeSystemPrivileges
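The repeated "kubectl get sa default" runs above are the wait loop behind elevateKubeSystemPrivileges: the default service account is polled until it exists before the cluster-admin binding for kube-system:default is relied on. A rough hand-rolled equivalent, illustrative only:

	KUBECTL=/var/lib/minikube/binaries/v1.32.0/kubectl
	until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # the default ServiceAccount appears asynchronously once the API server is up
	done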
	I1216 21:05:24.835729   60215 kubeadm.go:394] duration metric: took 5m0.316036708s to StartCluster
	I1216 21:05:24.835751   60215 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:05:24.835847   60215 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:05:24.838279   60215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:05:24.838580   60215 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:05:24.838625   60215 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:05:24.838747   60215 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-606219"
	I1216 21:05:24.838768   60215 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-606219"
	W1216 21:05:24.838789   60215 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:05:24.838816   60215 config.go:182] Loaded profile config "embed-certs-606219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:05:24.838825   60215 addons.go:69] Setting default-storageclass=true in profile "embed-certs-606219"
	I1216 21:05:24.838832   60215 addons.go:69] Setting metrics-server=true in profile "embed-certs-606219"
	I1216 21:05:24.838846   60215 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-606219"
	I1216 21:05:24.838822   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.838848   60215 addons.go:234] Setting addon metrics-server=true in "embed-certs-606219"
	W1216 21:05:24.838945   60215 addons.go:243] addon metrics-server should already be in state true
	I1216 21:05:24.838965   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.839285   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839292   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839331   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.839364   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839415   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.839496   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.843833   60215 out.go:177] * Verifying Kubernetes components...
	I1216 21:05:24.845341   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:05:24.857648   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I1216 21:05:24.858457   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.859021   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.859037   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.861356   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36663
	I1216 21:05:24.861406   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I1216 21:05:24.861357   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.861844   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.862150   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.862188   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.862315   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.862334   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.862334   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.862661   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.862876   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.862894   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.863171   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.863200   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.863634   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.863964   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.867371   60215 addons.go:234] Setting addon default-storageclass=true in "embed-certs-606219"
	W1216 21:05:24.867392   60215 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:05:24.867419   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.867758   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.867801   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.884243   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35999
	I1216 21:05:24.884680   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.885282   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.885304   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.885380   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36799
	I1216 21:05:24.885657   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.885730   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.885934   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.886191   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.886202   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.886473   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.886831   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.886853   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.887935   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.890092   60215 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:05:24.891395   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:05:24.891413   60215 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:05:24.891441   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.894367   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I1216 21:05:24.894926   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.895551   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.895570   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.895832   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.896148   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.896382   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.896501   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.896523   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.897136   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.897327   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.897507   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.897673   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:24.898101   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.900061   60215 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:05:24.901390   60215 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:05:24.901412   60215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:05:24.901432   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.904063   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.904403   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.904421   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.904617   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.904828   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.904969   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.905117   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:24.907518   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I1216 21:05:24.907890   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.908349   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.908362   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.908615   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.908793   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.910349   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.910557   60215 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:05:24.910590   60215 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:05:24.910623   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.913163   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.913546   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.913628   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.913971   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.914156   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.914402   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.914562   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:25.054773   60215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:05:25.077692   60215 node_ready.go:35] waiting up to 6m0s for node "embed-certs-606219" to be "Ready" ...
	I1216 21:05:25.085592   60215 node_ready.go:49] node "embed-certs-606219" has status "Ready":"True"
	I1216 21:05:25.085618   60215 node_ready.go:38] duration metric: took 7.893359ms for node "embed-certs-606219" to be "Ready" ...
	I1216 21:05:25.085630   60215 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:05:25.092073   60215 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:25.160890   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:05:25.171950   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:05:25.174517   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:05:25.174540   60215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:05:25.201386   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:05:25.201415   60215 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:05:25.279568   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:25.279599   60215 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:05:25.316528   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:25.944495   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944521   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944529   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944533   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944816   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.944835   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.944845   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944855   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944855   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.944869   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.944876   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944888   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944817   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945069   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945131   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.945147   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.945168   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.945173   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945218   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.961427   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.961449   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.961729   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.961743   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.745600   60215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.429029698s)
	I1216 21:05:26.745665   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:26.745678   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:26.746097   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:26.746115   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:26.746128   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.746142   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:26.746151   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:26.746429   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:26.746446   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.746457   60215 addons.go:475] Verifying addon metrics-server=true in "embed-certs-606219"
	I1216 21:05:26.746480   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:26.748859   60215 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1216 21:05:26.785021   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:05:26.785309   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:05:26.750502   60215 addons.go:510] duration metric: took 1.911885721s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1216 21:05:27.124629   60215 pod_ready.go:103] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:28.100607   60215 pod_ready.go:93] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:28.100642   60215 pod_ready.go:82] duration metric: took 3.008540123s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.100654   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.107620   60215 pod_ready.go:93] pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:28.107649   60215 pod_ready.go:82] duration metric: took 6.986126ms for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.107661   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:30.114012   60215 pod_ready.go:103] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:31.116704   60215 pod_ready.go:93] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:31.116738   60215 pod_ready.go:82] duration metric: took 3.009069732s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.116752   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.122043   60215 pod_ready.go:93] pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:31.122079   60215 pod_ready.go:82] duration metric: took 5.318248ms for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.122089   60215 pod_ready.go:39] duration metric: took 6.036446164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:05:31.122107   60215 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:05:31.122167   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:05:31.140854   60215 api_server.go:72] duration metric: took 6.302233923s to wait for apiserver process to appear ...
	I1216 21:05:31.140887   60215 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:05:31.140910   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:05:31.146080   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 200:
	ok
	I1216 21:05:31.147076   60215 api_server.go:141] control plane version: v1.32.0
	I1216 21:05:31.147107   60215 api_server.go:131] duration metric: took 6.2056ms to wait for apiserver health ...
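The healthz probe above hits the API server endpoint directly. The same check can be reproduced from the node with curl against the cluster CA, or from the host through kubectl's raw API access (illustrative, assuming the default minikube certificate layout):

	# On the node (CA path per the certificateDir logged earlier):
	curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.61.151:8443/healthz
	# From the host, via the kubeconfig minikube wrote:
	kubectl --context embed-certs-606219 get --raw /healthz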
	I1216 21:05:31.147115   60215 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:05:31.152598   60215 system_pods.go:59] 9 kube-system pods found
	I1216 21:05:31.152627   60215 system_pods.go:61] "coredns-668d6bf9bc-5c74p" [ef8e73b6-150f-47cc-9df9-dcf983e5bd6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.152634   60215 system_pods.go:61] "coredns-668d6bf9bc-xhdlz" [c1b5b585-f005-4885-9809-60f60e03bf04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.152640   60215 system_pods.go:61] "etcd-embed-certs-606219" [f5595ee4-23f3-4227-8e25-8679fd2dc722] Running
	I1216 21:05:31.152643   60215 system_pods.go:61] "kube-apiserver-embed-certs-606219" [be11ba17-ecee-47c1-a4bd-329e0e705369] Running
	I1216 21:05:31.152647   60215 system_pods.go:61] "kube-controller-manager-embed-certs-606219" [21210597-d4d5-4cab-9a24-2d9f702f682d] Running
	I1216 21:05:31.152652   60215 system_pods.go:61] "kube-proxy-677x9" [37810520-4f02-46c4-8eeb-6dc70c859e3e] Running
	I1216 21:05:31.152655   60215 system_pods.go:61] "kube-scheduler-embed-certs-606219" [5a39f42d-b727-4acd-bd39-ae1c56a5b725] Running
	I1216 21:05:31.152659   60215 system_pods.go:61] "metrics-server-f79f97bbb-6fxnl" [828f2925-402c-4f49-89e1-354e082c0de4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:31.152662   60215 system_pods.go:61] "storage-provisioner" [6437bd61-690b-498d-b35c-e2ef4eb5be97] Running
	I1216 21:05:31.152669   60215 system_pods.go:74] duration metric: took 5.548798ms to wait for pod list to return data ...
	I1216 21:05:31.152675   60215 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:05:31.155444   60215 default_sa.go:45] found service account: "default"
	I1216 21:05:31.155469   60215 default_sa.go:55] duration metric: took 2.788897ms for default service account to be created ...
	I1216 21:05:31.155477   60215 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:05:31.160520   60215 system_pods.go:86] 9 kube-system pods found
	I1216 21:05:31.160548   60215 system_pods.go:89] "coredns-668d6bf9bc-5c74p" [ef8e73b6-150f-47cc-9df9-dcf983e5bd6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.160555   60215 system_pods.go:89] "coredns-668d6bf9bc-xhdlz" [c1b5b585-f005-4885-9809-60f60e03bf04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.160561   60215 system_pods.go:89] "etcd-embed-certs-606219" [f5595ee4-23f3-4227-8e25-8679fd2dc722] Running
	I1216 21:05:31.160565   60215 system_pods.go:89] "kube-apiserver-embed-certs-606219" [be11ba17-ecee-47c1-a4bd-329e0e705369] Running
	I1216 21:05:31.160569   60215 system_pods.go:89] "kube-controller-manager-embed-certs-606219" [21210597-d4d5-4cab-9a24-2d9f702f682d] Running
	I1216 21:05:31.160573   60215 system_pods.go:89] "kube-proxy-677x9" [37810520-4f02-46c4-8eeb-6dc70c859e3e] Running
	I1216 21:05:31.160576   60215 system_pods.go:89] "kube-scheduler-embed-certs-606219" [5a39f42d-b727-4acd-bd39-ae1c56a5b725] Running
	I1216 21:05:31.160580   60215 system_pods.go:89] "metrics-server-f79f97bbb-6fxnl" [828f2925-402c-4f49-89e1-354e082c0de4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:31.160584   60215 system_pods.go:89] "storage-provisioner" [6437bd61-690b-498d-b35c-e2ef4eb5be97] Running
	I1216 21:05:31.160591   60215 system_pods.go:126] duration metric: took 5.109359ms to wait for k8s-apps to be running ...
	I1216 21:05:31.160597   60215 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:05:31.160637   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:31.177182   60215 system_svc.go:56] duration metric: took 16.575484ms WaitForService to wait for kubelet
	I1216 21:05:31.177216   60215 kubeadm.go:582] duration metric: took 6.33860089s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:05:31.177239   60215 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:05:31.180614   60215 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:05:31.180635   60215 node_conditions.go:123] node cpu capacity is 2
	I1216 21:05:31.180645   60215 node_conditions.go:105] duration metric: took 3.400617ms to run NodePressure ...
	I1216 21:05:31.180656   60215 start.go:241] waiting for startup goroutines ...
	I1216 21:05:31.180667   60215 start.go:246] waiting for cluster config update ...
	I1216 21:05:31.180684   60215 start.go:255] writing updated cluster config ...
	I1216 21:05:31.180960   60215 ssh_runner.go:195] Run: rm -f paused
	I1216 21:05:31.232404   60215 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:05:31.234366   60215 out.go:177] * Done! kubectl is now configured to use "embed-certs-606219" cluster and "default" namespace by default
	I1216 21:06:06.787417   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:06.787673   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:06:06.787700   60933 kubeadm.go:310] 
	I1216 21:06:06.787779   60933 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 21:06:06.787849   60933 kubeadm.go:310] 		timed out waiting for the condition
	I1216 21:06:06.787864   60933 kubeadm.go:310] 
	I1216 21:06:06.787894   60933 kubeadm.go:310] 	This error is likely caused by:
	I1216 21:06:06.787944   60933 kubeadm.go:310] 		- The kubelet is not running
	I1216 21:06:06.788115   60933 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 21:06:06.788131   60933 kubeadm.go:310] 
	I1216 21:06:06.788238   60933 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 21:06:06.788270   60933 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 21:06:06.788328   60933 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 21:06:06.788346   60933 kubeadm.go:310] 
	I1216 21:06:06.788492   60933 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 21:06:06.788568   60933 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 21:06:06.788575   60933 kubeadm.go:310] 
	I1216 21:06:06.788706   60933 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 21:06:06.788914   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 21:06:06.789052   60933 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 21:06:06.789150   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 21:06:06.789160   60933 kubeadm.go:310] 
	I1216 21:06:06.789970   60933 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:06:06.790084   60933 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 21:06:06.790222   60933 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1216 21:06:06.790376   60933 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
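At this point the v1.20.0 bring-up for this profile has timed out on the kubelet health check and is about to be retried (see the kubeadm reset below). The troubleshooting commands kubeadm suggests can be run directly on the node; a compact sequence, assuming SSH access via the profile name (<profile> is a placeholder, not taken from this log):

	minikube ssh -p <profile>                     # open a shell on the failing node
	curl -sSL http://localhost:10248/healthz      # the probe kubeadm keeps failing on
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause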
	
	I1216 21:06:06.790430   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:06:07.272336   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:06:07.288881   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:06:07.303411   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:06:07.303437   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:06:07.303486   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:06:07.314605   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:06:07.314675   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:06:07.326523   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:06:07.336506   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:06:07.336587   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:06:07.347505   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:06:07.357743   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:06:07.357799   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:06:07.368251   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:06:07.378296   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:06:07.378366   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:06:07.390625   60933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:06:07.461800   60933 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 21:06:07.461911   60933 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:06:07.607467   60933 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:06:07.607664   60933 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:06:07.607821   60933 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:06:07.821429   60933 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:06:07.823617   60933 out.go:235]   - Generating certificates and keys ...
	I1216 21:06:07.823728   60933 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:06:07.823826   60933 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:06:07.823970   60933 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:06:07.824066   60933 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:06:07.824191   60933 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:06:07.824281   60933 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:06:07.824374   60933 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:06:07.824452   60933 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:06:07.824529   60933 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:06:07.824634   60933 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:06:07.824728   60933 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:06:07.824826   60933 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:06:08.070481   60933 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:06:08.416182   60933 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:06:08.472848   60933 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:06:08.528700   60933 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:06:08.551528   60933 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:06:08.552215   60933 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:06:08.552299   60933 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:06:08.702187   60933 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:06:08.704170   60933 out.go:235]   - Booting up control plane ...
	I1216 21:06:08.704286   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:06:08.721205   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:06:08.722619   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:06:08.724289   60933 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:06:08.726457   60933 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 21:06:48.729045   60933 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 21:06:48.729713   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:48.730028   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:06:53.730648   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:53.730870   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:07:03.731670   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:07:03.731904   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:07:23.733276   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:07:23.733489   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:08:03.734439   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:08:03.734730   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:08:03.734768   60933 kubeadm.go:310] 
	I1216 21:08:03.734831   60933 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 21:08:03.734902   60933 kubeadm.go:310] 		timed out waiting for the condition
	I1216 21:08:03.734917   60933 kubeadm.go:310] 
	I1216 21:08:03.734966   60933 kubeadm.go:310] 	This error is likely caused by:
	I1216 21:08:03.735003   60933 kubeadm.go:310] 		- The kubelet is not running
	I1216 21:08:03.735094   60933 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 21:08:03.735104   60933 kubeadm.go:310] 
	I1216 21:08:03.735260   60933 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 21:08:03.735325   60933 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 21:08:03.735353   60933 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 21:08:03.735359   60933 kubeadm.go:310] 
	I1216 21:08:03.735486   60933 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 21:08:03.735604   60933 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 21:08:03.735614   60933 kubeadm.go:310] 
	I1216 21:08:03.735757   60933 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 21:08:03.735880   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 21:08:03.735986   60933 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 21:08:03.736096   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 21:08:03.736107   60933 kubeadm.go:310] 
	I1216 21:08:03.736944   60933 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:08:03.737145   60933 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 21:08:03.737211   60933 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 21:08:03.737287   60933 kubeadm.go:394] duration metric: took 7m57.891196073s to StartCluster
	I1216 21:08:03.737346   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:08:03.737417   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:08:03.789377   60933 cri.go:89] found id: ""
	I1216 21:08:03.789412   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.789421   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:08:03.789426   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:08:03.789477   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:08:03.831122   60933 cri.go:89] found id: ""
	I1216 21:08:03.831150   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.831161   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:08:03.831167   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:08:03.831236   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:08:03.870598   60933 cri.go:89] found id: ""
	I1216 21:08:03.870625   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.870634   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:08:03.870640   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:08:03.870695   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:08:03.909060   60933 cri.go:89] found id: ""
	I1216 21:08:03.909095   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.909103   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:08:03.909109   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:08:03.909163   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:08:03.946925   60933 cri.go:89] found id: ""
	I1216 21:08:03.946954   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.946962   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:08:03.946968   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:08:03.947038   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:08:03.985596   60933 cri.go:89] found id: ""
	I1216 21:08:03.985629   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.985650   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:08:03.985670   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:08:03.985736   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:08:04.022504   60933 cri.go:89] found id: ""
	I1216 21:08:04.022530   60933 logs.go:282] 0 containers: []
	W1216 21:08:04.022538   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:08:04.022545   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:08:04.022608   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:08:04.075636   60933 cri.go:89] found id: ""
	I1216 21:08:04.075667   60933 logs.go:282] 0 containers: []
	W1216 21:08:04.075677   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:08:04.075688   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:08:04.075707   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:08:04.180622   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:08:04.180653   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:08:04.180671   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:08:04.308091   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:08:04.308146   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:08:04.353240   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:08:04.353294   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:08:04.407919   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:08:04.407955   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 21:08:04.423583   60933 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1216 21:08:04.423644   60933 out.go:270] * 
	W1216 21:08:04.423727   60933 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 21:08:04.423749   60933 out.go:270] * 
	W1216 21:08:04.424576   60933 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 21:08:04.428361   60933 out.go:201] 
	W1216 21:08:04.429839   60933 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 21:08:04.429919   60933 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 21:08:04.429958   60933 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 21:08:04.431619   60933 out.go:201] 
	
	
	==> CRI-O <==
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.291239096Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383286291215576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e89d0641-7223-4dfe-a281-c3461c5736ba name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.292247998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f118ddab-c682-4477-90f3-6f957f910360 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.292322538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f118ddab-c682-4477-90f3-6f957f910360 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.292363048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f118ddab-c682-4477-90f3-6f957f910360 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.331107629Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9e3167c5-2687-4863-8d55-2c6897117135 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.331188000Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e3167c5-2687-4863-8d55-2c6897117135 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.332418958Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b101230-2432-4ca7-b34a-6a9e5425d87c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.332852895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383286332827641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b101230-2432-4ca7-b34a-6a9e5425d87c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.333784103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ecd7130-6d05-42cf-812b-9a007b1a1ccd name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.333838414Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ecd7130-6d05-42cf-812b-9a007b1a1ccd name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.333921249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0ecd7130-6d05-42cf-812b-9a007b1a1ccd name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.370687228Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0c1b7f9e-b235-4e62-9be7-0dd6455cbb36 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.370767989Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0c1b7f9e-b235-4e62-9be7-0dd6455cbb36 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.372182804Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0454a874-212c-41e6-9a61-8674aa07d3ea name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.372641392Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383286372566793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0454a874-212c-41e6-9a61-8674aa07d3ea name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.373341104Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb1cc7a3-90ab-46ec-b1b1-25b08a17c04c name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.373419823Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb1cc7a3-90ab-46ec-b1b1-25b08a17c04c name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.373455433Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bb1cc7a3-90ab-46ec-b1b1-25b08a17c04c name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.412841797Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e81d10c8-e568-4dbf-8b60-65bbb522cc27 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.412937938Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e81d10c8-e568-4dbf-8b60-65bbb522cc27 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.414495901Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25d86c3c-5499-4107-8799-6b4369fe8bfd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.414937157Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383286414911150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25d86c3c-5499-4107-8799-6b4369fe8bfd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.415353307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a4a63c6-0ee0-4c5c-a6ef-6f150488cfb9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.415433074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a4a63c6-0ee0-4c5c-a6ef-6f150488cfb9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:08:06 old-k8s-version-847766 crio[626]: time="2024-12-16 21:08:06.415468399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1a4a63c6-0ee0-4c5c-a6ef-6f150488cfb9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 20:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053004] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042792] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.137612] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.914611] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.669532] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.090745] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.063238] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068057] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.211871] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.132194] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.273053] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[Dec16 21:00] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.063116] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.314286] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[ +12.945441] kauditd_printk_skb: 46 callbacks suppressed
	[Dec16 21:04] systemd-fstab-generator[4991]: Ignoring "noauto" option for root device
	[Dec16 21:06] systemd-fstab-generator[5267]: Ignoring "noauto" option for root device
	[  +0.075796] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:08:06 up 8 min,  0 users,  load average: 0.10, 0.15, 0.08
	Linux old-k8s-version-847766 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 16 21:08:03 old-k8s-version-847766 kubelet[5447]: net/http.(*Transport).dial(0xc0005b9680, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000a9c6c0, 0x24, 0x0, 0x0, 0x0, ...)
	Dec 16 21:08:03 old-k8s-version-847766 kubelet[5447]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Dec 16 21:08:03 old-k8s-version-847766 kubelet[5447]: net/http.(*Transport).dialConn(0xc0005b9680, 0x4f7fe00, 0xc000052030, 0x0, 0xc00056e480, 0x5, 0xc000a9c6c0, 0x24, 0x0, 0xc00051c6c0, ...)
	Dec 16 21:08:03 old-k8s-version-847766 kubelet[5447]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Dec 16 21:08:03 old-k8s-version-847766 kubelet[5447]: net/http.(*Transport).dialConnFor(0xc0005b9680, 0xc0008e82c0)
	Dec 16 21:08:03 old-k8s-version-847766 kubelet[5447]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Dec 16 21:08:03 old-k8s-version-847766 kubelet[5447]: created by net/http.(*Transport).queueForDial
	Dec 16 21:08:03 old-k8s-version-847766 kubelet[5447]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Dec 16 21:08:03 old-k8s-version-847766 kubelet[5447]: goroutine 165 [runnable]:
	Dec 16 21:08:03 old-k8s-version-847766 kubelet[5447]: runtime.Gosched(...)
	Dec 16 21:08:03 old-k8s-version-847766 kubelet[5447]:         /usr/local/go/src/runtime/proc.go:271
	Dec 16 21:08:03 old-k8s-version-847766 kubelet[5447]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0005328a0, 0x0, 0x0)
	Dec 16 21:08:03 old-k8s-version-847766 kubelet[5447]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:549 +0x1a5
	Dec 16 21:08:03 old-k8s-version-847766 kubelet[5447]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0007c0540)
	Dec 16 21:08:03 old-k8s-version-847766 kubelet[5447]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Dec 16 21:08:03 old-k8s-version-847766 kubelet[5447]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Dec 16 21:08:03 old-k8s-version-847766 kubelet[5447]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Dec 16 21:08:04 old-k8s-version-847766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Dec 16 21:08:04 old-k8s-version-847766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 16 21:08:04 old-k8s-version-847766 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 16 21:08:04 old-k8s-version-847766 kubelet[5501]: I1216 21:08:04.266243    5501 server.go:416] Version: v1.20.0
	Dec 16 21:08:04 old-k8s-version-847766 kubelet[5501]: I1216 21:08:04.266557    5501 server.go:837] Client rotation is on, will bootstrap in background
	Dec 16 21:08:04 old-k8s-version-847766 kubelet[5501]: I1216 21:08:04.268661    5501 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 16 21:08:04 old-k8s-version-847766 kubelet[5501]: I1216 21:08:04.269631    5501 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Dec 16 21:08:04 old-k8s-version-847766 kubelet[5501]: W1216 21:08:04.269646    5501 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-847766 -n old-k8s-version-847766
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-847766 -n old-k8s-version-847766: exit status 2 (245.257025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-847766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (753.70s)
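The kubeadm wait-control-plane failure above ends with minikube's own hint to inspect the kubelet and retry with an explicit cgroup driver. A minimal manual-repro sketch of that hint, assuming the same profile name (old-k8s-version-847766) from this run and that the node is still reachable over minikube ssh; it is not part of the test itself:

	# Inspect why the kubelet keeps restarting (the restart counter was at 20 in the log above).
	minikube ssh -p old-k8s-version-847766 -- sudo journalctl -xeu kubelet | tail -n 100
	# List any control-plane containers CRI-O created, as the kubeadm error text recommends.
	minikube ssh -p old-k8s-version-847766 -- sudo crictl ps -a | grep kube | grep -v pause
	# Retry the start with the cgroup-driver override suggested in the log.
	minikube start -p old-k8s-version-847766 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd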

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-327790 -n default-k8s-diff-port-327790
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-16 21:13:57.072788048 +0000 UTC m=+5955.546435784
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
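The harness above waits on the "k8s-app=kubernetes-dashboard" selector via its own poller; a short sketch of checking the same pods by hand, assuming the default-k8s-diff-port-327790 kubeconfig context created by this run:

	# List the dashboard pods the test was waiting on and show why they are not Ready.
	kubectl --context default-k8s-diff-port-327790 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context default-k8s-diff-port-327790 -n kubernetes-dashboard \
	  describe pods -l k8s-app=kubernetes-dashboard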
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-327790 -n default-k8s-diff-port-327790
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-327790 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-327790 logs -n 25: (2.127402593s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-976873                              | stopped-upgrade-976873       | jenkins | v1.34.0 | 16 Dec 24 20:49 UTC | 16 Dec 24 20:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-560677                           | kubernetes-upgrade-560677    | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:50 UTC |
	| start   | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-976873                              | stopped-upgrade-976873       | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:50 UTC |
	| start   | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-270954                              | cert-expiration-270954       | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-606219            | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-270954                              | cert-expiration-270954       | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-384008 | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | disable-driver-mounts-384008                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:52 UTC |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-232338             | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC | 16 Dec 24 20:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-327790  | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC | 16 Dec 24 20:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC |                     |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-847766        | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-606219                 | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC | 16 Dec 24 21:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-232338                  | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC | 16 Dec 24 21:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-327790       | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 20:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 21:04 UTC |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-847766             | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 20:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 20:55:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 20:55:34.390724   60933 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:55:34.390973   60933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:55:34.390982   60933 out.go:358] Setting ErrFile to fd 2...
	I1216 20:55:34.390986   60933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:55:34.391166   60933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:55:34.391763   60933 out.go:352] Setting JSON to false
	I1216 20:55:34.392611   60933 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5879,"bootTime":1734376655,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 20:55:34.392675   60933 start.go:139] virtualization: kvm guest
	I1216 20:55:34.394822   60933 out.go:177] * [old-k8s-version-847766] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 20:55:34.396184   60933 notify.go:220] Checking for updates...
	I1216 20:55:34.396201   60933 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 20:55:34.397724   60933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 20:55:34.399130   60933 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:55:34.400470   60933 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:55:34.401934   60933 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 20:55:34.403341   60933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 20:55:34.405179   60933 config.go:182] Loaded profile config "old-k8s-version-847766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:55:34.405571   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:55:34.405650   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:55:34.421052   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I1216 20:55:34.421523   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:55:34.422018   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:55:34.422035   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:55:34.422373   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:55:34.422646   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:55:34.424565   60933 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I1216 20:55:34.426088   60933 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 20:55:34.426419   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:55:34.426474   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:55:34.441375   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I1216 20:55:34.441833   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:55:34.442327   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:55:34.442349   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:55:34.442658   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:55:34.442852   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:55:34.480512   60933 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 20:55:34.481972   60933 start.go:297] selected driver: kvm2
	I1216 20:55:34.481988   60933 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:55:34.482125   60933 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 20:55:34.482826   60933 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:55:34.482907   60933 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 20:55:34.498561   60933 install.go:137] /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1216 20:55:34.498953   60933 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 20:55:34.498981   60933 cni.go:84] Creating CNI manager for ""
	I1216 20:55:34.499022   60933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:55:34.499060   60933 start.go:340] cluster config:
	{Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:55:34.499164   60933 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:55:34.501128   60933 out.go:177] * Starting "old-k8s-version-847766" primary control-plane node in "old-k8s-version-847766" cluster
	I1216 20:55:29.827520   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:32.899553   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:30.468027   60829 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:55:30.468071   60829 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:55:30.468079   60829 cache.go:56] Caching tarball of preloaded images
	I1216 20:55:30.468192   60829 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:55:30.468206   60829 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1216 20:55:30.468310   60829 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/config.json ...
	I1216 20:55:30.468540   60829 start.go:360] acquireMachinesLock for default-k8s-diff-port-327790: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:55:34.502579   60933 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:55:34.502609   60933 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:55:34.502615   60933 cache.go:56] Caching tarball of preloaded images
	I1216 20:55:34.502716   60933 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:55:34.502731   60933 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1216 20:55:34.502823   60933 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/config.json ...
	I1216 20:55:34.503011   60933 start.go:360] acquireMachinesLock for old-k8s-version-847766: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:55:38.979556   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:42.051532   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:48.131588   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:51.203568   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:57.283622   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:00.355490   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:06.435543   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:09.507559   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:15.587526   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:18.659657   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:24.739528   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:27.811498   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:33.891518   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:36.963554   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:43.043553   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:46.115578   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:52.195583   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:55.267507   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:01.347591   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:04.419562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:10.499479   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:13.571540   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:19.651541   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:22.723545   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:28.803551   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:31.875527   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:37.955563   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:41.027520   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:47.107494   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:50.179566   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:56.259550   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:59.331540   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:05.411562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:08.483592   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:14.563574   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:17.635522   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:23.715548   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:26.787559   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:32.867539   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:35.939502   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:42.019562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:45.091545   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:51.171521   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:54.243542   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:57.248710   60421 start.go:364] duration metric: took 4m14.403979547s to acquireMachinesLock for "no-preload-232338"
	I1216 20:58:57.248796   60421 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:58:57.248804   60421 fix.go:54] fixHost starting: 
	I1216 20:58:57.249232   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:58:57.249288   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:58:57.264905   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39773
	I1216 20:58:57.265423   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:58:57.265982   60421 main.go:141] libmachine: Using API Version  1
	I1216 20:58:57.266005   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:58:57.266396   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:58:57.266636   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:58:57.266807   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 20:58:57.268705   60421 fix.go:112] recreateIfNeeded on no-preload-232338: state=Stopped err=<nil>
	I1216 20:58:57.268730   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	W1216 20:58:57.268918   60421 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:58:57.270855   60421 out.go:177] * Restarting existing kvm2 VM for "no-preload-232338" ...
	I1216 20:58:57.272142   60421 main.go:141] libmachine: (no-preload-232338) Calling .Start
	I1216 20:58:57.272374   60421 main.go:141] libmachine: (no-preload-232338) Ensuring networks are active...
	I1216 20:58:57.273245   60421 main.go:141] libmachine: (no-preload-232338) Ensuring network default is active
	I1216 20:58:57.273660   60421 main.go:141] libmachine: (no-preload-232338) Ensuring network mk-no-preload-232338 is active
	I1216 20:58:57.274025   60421 main.go:141] libmachine: (no-preload-232338) Getting domain xml...
	I1216 20:58:57.274673   60421 main.go:141] libmachine: (no-preload-232338) Creating domain...
	I1216 20:58:57.245632   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:58:57.245753   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 20:58:57.246111   60215 buildroot.go:166] provisioning hostname "embed-certs-606219"
	I1216 20:58:57.246149   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 20:58:57.246399   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 20:58:57.248517   60215 machine.go:96] duration metric: took 4m37.414570479s to provisionDockerMachine
	I1216 20:58:57.248579   60215 fix.go:56] duration metric: took 4m37.437232743s for fixHost
	I1216 20:58:57.248587   60215 start.go:83] releasing machines lock for "embed-certs-606219", held for 4m37.437262865s
	W1216 20:58:57.248614   60215 start.go:714] error starting host: provision: host is not running
	W1216 20:58:57.248791   60215 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1216 20:58:57.248801   60215 start.go:729] Will try again in 5 seconds ...
	I1216 20:58:58.506521   60421 main.go:141] libmachine: (no-preload-232338) Waiting to get IP...
	I1216 20:58:58.507302   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:58.507627   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:58.507699   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:58.507613   61660 retry.go:31] will retry after 230.281045ms: waiting for machine to come up
	I1216 20:58:58.739343   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:58.739781   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:58.739804   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:58.739741   61660 retry.go:31] will retry after 323.962271ms: waiting for machine to come up
	I1216 20:58:59.065388   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:59.065856   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:59.065884   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:59.065816   61660 retry.go:31] will retry after 364.058481ms: waiting for machine to come up
	I1216 20:58:59.431290   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:59.431680   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:59.431707   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:59.431631   61660 retry.go:31] will retry after 569.845721ms: waiting for machine to come up
	I1216 20:59:00.003562   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:00.004030   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:00.004093   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:00.003988   61660 retry.go:31] will retry after 728.729909ms: waiting for machine to come up
	I1216 20:59:00.733954   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:00.734450   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:00.734482   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:00.734388   61660 retry.go:31] will retry after 679.479889ms: waiting for machine to come up
	I1216 20:59:01.415289   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:01.415739   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:01.415763   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:01.415690   61660 retry.go:31] will retry after 1.136560245s: waiting for machine to come up
	I1216 20:59:02.554094   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:02.554523   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:02.554548   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:02.554470   61660 retry.go:31] will retry after 1.299578742s: waiting for machine to come up
	I1216 20:59:02.250499   60215 start.go:360] acquireMachinesLock for embed-certs-606219: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:59:03.855999   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:03.856366   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:03.856399   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:03.856300   61660 retry.go:31] will retry after 1.761269163s: waiting for machine to come up
	I1216 20:59:05.620383   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:05.620837   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:05.620858   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:05.620818   61660 retry.go:31] will retry after 2.100894301s: waiting for machine to come up
	I1216 20:59:07.723931   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:07.724300   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:07.724322   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:07.724273   61660 retry.go:31] will retry after 2.57501483s: waiting for machine to come up
	I1216 20:59:10.302185   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:10.302766   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:10.302802   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:10.302706   61660 retry.go:31] will retry after 2.68456895s: waiting for machine to come up
	I1216 20:59:17.060397   60829 start.go:364] duration metric: took 3m46.591813882s to acquireMachinesLock for "default-k8s-diff-port-327790"
	I1216 20:59:17.060456   60829 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:17.060462   60829 fix.go:54] fixHost starting: 
	I1216 20:59:17.060878   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:17.060935   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:17.079226   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41365
	I1216 20:59:17.079715   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:17.080173   60829 main.go:141] libmachine: Using API Version  1
	I1216 20:59:17.080202   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:17.080554   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:17.080731   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:17.080873   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 20:59:17.082368   60829 fix.go:112] recreateIfNeeded on default-k8s-diff-port-327790: state=Stopped err=<nil>
	I1216 20:59:17.082399   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	W1216 20:59:17.082570   60829 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:17.085104   60829 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-327790" ...
	I1216 20:59:12.988787   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:12.989140   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:12.989172   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:12.989098   61660 retry.go:31] will retry after 2.793178881s: waiting for machine to come up
	I1216 20:59:15.786011   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.786518   60421 main.go:141] libmachine: (no-preload-232338) Found IP for machine: 192.168.50.240
	I1216 20:59:15.786540   60421 main.go:141] libmachine: (no-preload-232338) Reserving static IP address...
	I1216 20:59:15.786564   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has current primary IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.786948   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "no-preload-232338", mac: "52:54:00:07:00:29", ip: "192.168.50.240"} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.786983   60421 main.go:141] libmachine: (no-preload-232338) DBG | skip adding static IP to network mk-no-preload-232338 - found existing host DHCP lease matching {name: "no-preload-232338", mac: "52:54:00:07:00:29", ip: "192.168.50.240"}
	I1216 20:59:15.786995   60421 main.go:141] libmachine: (no-preload-232338) Reserved static IP address: 192.168.50.240
	I1216 20:59:15.787009   60421 main.go:141] libmachine: (no-preload-232338) Waiting for SSH to be available...
	I1216 20:59:15.787022   60421 main.go:141] libmachine: (no-preload-232338) DBG | Getting to WaitForSSH function...
	I1216 20:59:15.789175   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.789509   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.789542   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.789633   60421 main.go:141] libmachine: (no-preload-232338) DBG | Using SSH client type: external
	I1216 20:59:15.789674   60421 main.go:141] libmachine: (no-preload-232338) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa (-rw-------)
	I1216 20:59:15.789709   60421 main.go:141] libmachine: (no-preload-232338) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:15.789718   60421 main.go:141] libmachine: (no-preload-232338) DBG | About to run SSH command:
	I1216 20:59:15.789726   60421 main.go:141] libmachine: (no-preload-232338) DBG | exit 0
	I1216 20:59:15.915980   60421 main.go:141] libmachine: (no-preload-232338) DBG | SSH cmd err, output: <nil>: 
	I1216 20:59:15.916473   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetConfigRaw
	I1216 20:59:15.917088   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:15.919782   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.920161   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.920192   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.920408   60421 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/config.json ...
	I1216 20:59:15.920636   60421 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:15.920654   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:15.920875   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:15.923221   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.923623   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.923650   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.923784   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:15.923971   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:15.924107   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:15.924246   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:15.924395   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:15.924715   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:15.924729   60421 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:16.032079   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:16.032108   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.032397   60421 buildroot.go:166] provisioning hostname "no-preload-232338"
	I1216 20:59:16.032423   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.032649   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.035467   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.035798   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.035826   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.036003   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.036184   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.036335   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.036494   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.036679   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.036847   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.036859   60421 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-232338 && echo "no-preload-232338" | sudo tee /etc/hostname
	I1216 20:59:16.161958   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-232338
	
	I1216 20:59:16.161996   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.164585   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.165086   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.165130   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.165370   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.165578   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.165746   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.165877   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.166015   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.166188   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.166204   60421 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-232338' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-232338/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-232338' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:16.285329   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:16.285374   60421 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:16.285407   60421 buildroot.go:174] setting up certificates
	I1216 20:59:16.285422   60421 provision.go:84] configureAuth start
	I1216 20:59:16.285432   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.285764   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:16.288773   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.289161   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.289192   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.289405   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.291687   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.292042   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.292072   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.292190   60421 provision.go:143] copyHostCerts
	I1216 20:59:16.292260   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:16.292274   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:16.292343   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:16.292470   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:16.292481   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:16.292508   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:16.292563   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:16.292570   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:16.292590   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:16.292649   60421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.no-preload-232338 san=[127.0.0.1 192.168.50.240 localhost minikube no-preload-232338]
	I1216 20:59:16.407096   60421 provision.go:177] copyRemoteCerts
	I1216 20:59:16.407187   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:16.407227   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.410400   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.410725   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.410755   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.410977   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.411188   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.411437   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.411618   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:16.498456   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 20:59:16.525297   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:16.551135   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 20:59:16.576040   60421 provision.go:87] duration metric: took 290.601941ms to configureAuth
	I1216 20:59:16.576074   60421 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:16.576288   60421 config.go:182] Loaded profile config "no-preload-232338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:59:16.576396   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.579169   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.579607   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.579641   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.579795   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.580016   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.580165   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.580311   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.580467   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.580629   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.580643   60421 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:16.816973   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:16.816998   60421 machine.go:96] duration metric: took 896.349056ms to provisionDockerMachine
	I1216 20:59:16.817010   60421 start.go:293] postStartSetup for "no-preload-232338" (driver="kvm2")
	I1216 20:59:16.817030   60421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:16.817044   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:16.817427   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:16.817454   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.820182   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.820550   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.820578   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.820713   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.820914   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.821096   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.821274   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:16.906513   60421 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:16.911314   60421 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:16.911346   60421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:16.911482   60421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:16.911589   60421 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:16.911720   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:16.921890   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:16.947114   60421 start.go:296] duration metric: took 130.089628ms for postStartSetup
	I1216 20:59:16.947192   60421 fix.go:56] duration metric: took 19.698385497s for fixHost
	I1216 20:59:16.947229   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.950156   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.950543   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.950575   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.950780   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.950996   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.951199   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.951394   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.951604   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.951829   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.951843   60421 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:17.060233   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382757.032597424
	
	I1216 20:59:17.060258   60421 fix.go:216] guest clock: 1734382757.032597424
	I1216 20:59:17.060265   60421 fix.go:229] Guest: 2024-12-16 20:59:17.032597424 +0000 UTC Remote: 2024-12-16 20:59:16.947203535 +0000 UTC m=+274.247918927 (delta=85.393889ms)
	I1216 20:59:17.060290   60421 fix.go:200] guest clock delta is within tolerance: 85.393889ms
	I1216 20:59:17.060294   60421 start.go:83] releasing machines lock for "no-preload-232338", held for 19.811539815s
	I1216 20:59:17.060318   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.060636   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:17.063346   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.063742   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.063764   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.063900   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064419   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064647   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064766   60421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:17.064804   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:17.064897   60421 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:17.064923   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:17.067687   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.067897   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068129   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.068166   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068314   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:17.068318   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.068343   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068491   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:17.068573   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:17.068754   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:17.068778   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:17.068914   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:17.069085   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:17.069229   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:17.149502   60421 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:17.184981   60421 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:17.335267   60421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:17.344316   60421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:17.344381   60421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:17.362422   60421 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:17.362450   60421 start.go:495] detecting cgroup driver to use...
	I1216 20:59:17.362526   60421 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:17.379285   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:17.394451   60421 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:17.394514   60421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:17.411856   60421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:17.428028   60421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:17.557602   60421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:17.699140   60421 docker.go:233] disabling docker service ...
	I1216 20:59:17.699215   60421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:17.715236   60421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:17.729268   60421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:17.875729   60421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:18.007569   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:18.022940   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:18.042227   60421 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 20:59:18.042292   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.053011   60421 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:18.053081   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.063767   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.074262   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.085372   60421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:18.098366   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.113619   60421 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.134081   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.145276   60421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:18.155733   60421 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:18.155806   60421 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:18.170492   60421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 20:59:18.182276   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:18.291278   60421 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:18.384618   60421 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:18.384700   60421 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:18.390755   60421 start.go:563] Will wait 60s for crictl version
	I1216 20:59:18.390823   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.395435   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:18.439300   60421 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:18.439390   60421 ssh_runner.go:195] Run: crio --version
	I1216 20:59:18.473976   60421 ssh_runner.go:195] Run: crio --version
	I1216 20:59:18.505262   60421 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
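	(The sed edits at 20:59:18.04–18.13 above boil down to a handful of keys in the CRI-O drop-in. A minimal sketch for checking the result on the guest — key names and values are taken from the commands in this log; their exact placement inside the stock /etc/crio/crio.conf.d/02-crio.conf is an assumption:
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # should show lines like (order follows the file):
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   default_sysctls = [
	    #     "net.ipv4.ip_unprivileged_port_start=0",
	    sudo systemctl daemon-reload && sudo systemctl restart crio
	)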
	I1216 20:59:17.086569   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Start
	I1216 20:59:17.086752   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring networks are active...
	I1216 20:59:17.087656   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring network default is active
	I1216 20:59:17.088082   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring network mk-default-k8s-diff-port-327790 is active
	I1216 20:59:17.088482   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Getting domain xml...
	I1216 20:59:17.089219   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Creating domain...
	I1216 20:59:18.413245   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting to get IP...
	I1216 20:59:18.414327   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.414794   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.414907   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.414784   61807 retry.go:31] will retry after 229.952775ms: waiting for machine to come up
	I1216 20:59:18.646270   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.646677   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.646727   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.646654   61807 retry.go:31] will retry after 341.342128ms: waiting for machine to come up
	I1216 20:59:18.989285   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.989781   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.989809   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.989740   61807 retry.go:31] will retry after 311.937657ms: waiting for machine to come up
	I1216 20:59:19.303619   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.304189   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.304221   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:19.304131   61807 retry.go:31] will retry after 515.638431ms: waiting for machine to come up
	I1216 20:59:19.821478   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.821955   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.821997   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:19.821900   61807 retry.go:31] will retry after 590.835789ms: waiting for machine to come up
	I1216 20:59:18.506840   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:18.510260   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:18.510654   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:18.510689   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:18.510875   60421 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:18.515632   60421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:18.529943   60421 kubeadm.go:883] updating cluster {Name:no-preload-232338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.32.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:18.530128   60421 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:59:18.530184   60421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:18.569526   60421 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 20:59:18.569555   60421 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.0 registry.k8s.io/kube-controller-manager:v1.32.0 registry.k8s.io/kube-scheduler:v1.32.0 registry.k8s.io/kube-proxy:v1.32.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 20:59:18.569650   60421 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:18.569669   60421 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.569688   60421 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.569651   60421 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.569774   60421 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.569859   60421 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.569859   60421 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1216 20:59:18.570294   60421 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.571577   60421 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.571602   60421 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.571582   60421 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:18.571585   60421 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.571583   60421 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.571580   60421 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.571828   60421 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.571953   60421 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1216 20:59:18.781052   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.783569   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.795901   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.799273   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.801098   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.802163   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1216 20:59:18.828334   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.897880   60421 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.0" does not exist at hash "a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5" in container runtime
	I1216 20:59:18.897942   60421 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.898003   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.910616   60421 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I1216 20:59:18.910665   60421 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.910713   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.937699   60421 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.0" does not exist at hash "8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3" in container runtime
	I1216 20:59:18.937753   60421 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.937804   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.979455   60421 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.0" does not exist at hash "c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4" in container runtime
	I1216 20:59:18.979500   60421 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.979540   60421 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1216 20:59:18.979555   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.979586   60421 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.979636   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.002472   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.076177   60421 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.0" needs transfer: "registry.k8s.io/kube-proxy:v1.32.0" does not exist at hash "040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08" in container runtime
	I1216 20:59:19.076217   60421 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.076237   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.076252   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.076292   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.076351   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.076408   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.076487   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.076511   60421 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1216 20:59:19.076536   60421 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.076580   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.204766   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.204846   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.204904   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.204959   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.205097   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.205212   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.205285   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.365421   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.365466   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.365512   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.365620   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.365652   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.365771   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.365861   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.539614   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1216 20:59:19.539729   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:19.539740   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.539740   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0
	I1216 20:59:19.539817   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0
	I1216 20:59:19.539839   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:19.539840   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.539885   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.539949   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0
	I1216 20:59:19.540000   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I1216 20:59:19.540029   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:19.540062   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:19.555043   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.32.0 (exists)
	I1216 20:59:19.555076   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.555135   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.555251   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1216 20:59:19.630857   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.16-0 (exists)
	I1216 20:59:19.630949   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1216 20:59:19.630983   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0
	I1216 20:59:19.631030   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.32.0 (exists)
	I1216 20:59:19.631065   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:19.631104   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.32.0 (exists)
	I1216 20:59:19.631069   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:21.838285   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0: (2.283119694s)
	I1216 20:59:21.838328   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 from cache
	I1216 20:59:21.838359   60421 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:21.838394   60421 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.20725659s)
	I1216 20:59:21.838414   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1216 20:59:21.838421   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:21.838361   60421 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.0: (2.207274997s)
	I1216 20:59:21.838471   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.32.0 (exists)
	I1216 20:59:20.414932   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:20.415565   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:20.415597   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:20.415502   61807 retry.go:31] will retry after 698.152518ms: waiting for machine to come up
	I1216 20:59:21.115103   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:21.115597   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:21.115627   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:21.115543   61807 retry.go:31] will retry after 891.02308ms: waiting for machine to come up
	I1216 20:59:22.008636   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.009070   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.009098   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:22.009015   61807 retry.go:31] will retry after 923.634312ms: waiting for machine to come up
	I1216 20:59:22.934238   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.934753   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.934784   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:22.934697   61807 retry.go:31] will retry after 1.142718367s: waiting for machine to come up
	I1216 20:59:24.078935   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:24.079398   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:24.079429   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:24.079363   61807 retry.go:31] will retry after 1.541033224s: waiting for machine to come up
	I1216 20:59:23.901058   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.062611423s)
	I1216 20:59:23.901091   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1216 20:59:23.901122   60421 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:23.901169   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:25.621932   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:25.622401   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:25.622433   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:25.622364   61807 retry.go:31] will retry after 2.600280234s: waiting for machine to come up
	I1216 20:59:28.224296   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:28.224874   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:28.224892   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:28.224828   61807 retry.go:31] will retry after 3.308841216s: waiting for machine to come up
	I1216 20:59:27.793238   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0: (3.892042799s)
	I1216 20:59:27.793280   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 from cache
	I1216 20:59:27.793321   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:27.793420   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:29.552069   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0: (1.758623471s)
	I1216 20:59:29.552102   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 from cache
	I1216 20:59:29.552130   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:29.552177   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:31.708930   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0: (2.156719559s)
	I1216 20:59:31.708971   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 from cache
	I1216 20:59:31.709008   60421 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:31.709057   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:32.660657   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1216 20:59:32.660713   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:32.660775   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:31.537153   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:31.537735   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:31.537795   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:31.537710   61807 retry.go:31] will retry after 4.259700282s: waiting for machine to come up
	I1216 20:59:37.140408   60933 start.go:364] duration metric: took 4m2.637362394s to acquireMachinesLock for "old-k8s-version-847766"
	I1216 20:59:37.140483   60933 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:37.140491   60933 fix.go:54] fixHost starting: 
	I1216 20:59:37.140933   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:37.140988   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:37.159075   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
	I1216 20:59:37.159574   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:37.160140   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:59:37.160172   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:37.160560   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:37.160773   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:37.160889   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetState
	I1216 20:59:37.162561   60933 fix.go:112] recreateIfNeeded on old-k8s-version-847766: state=Stopped err=<nil>
	I1216 20:59:37.162603   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	W1216 20:59:37.162755   60933 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:37.166031   60933 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-847766" ...
	I1216 20:59:34.634064   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0: (1.973261206s)
	I1216 20:59:34.634117   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 from cache
	I1216 20:59:34.634154   60421 cache_images.go:123] Successfully loaded all cached images
	I1216 20:59:34.634160   60421 cache_images.go:92] duration metric: took 16.064590407s to LoadCachedImages
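	(The 16s above is spent on a per-image inspect → crictl rmi → podman load cycle. A rough sketch of that cycle for a single image, runnable by hand on the guest — image and tarball paths are taken from this log; the conditional wrapper is added here for illustration:
	    IMG=registry.k8s.io/kube-scheduler:v1.32.0
	    TAR=/var/lib/minikube/images/kube-scheduler_v1.32.0
	    if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
	      sudo /usr/bin/crictl rmi "$IMG" 2>/dev/null || true   # drop any stale tag, as above
	      sudo podman load -i "$TAR"                            # load the cached tarball scp'd earlier
	    fi
	)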
	I1216 20:59:34.634171   60421 kubeadm.go:934] updating node { 192.168.50.240 8443 v1.32.0 crio true true} ...
	I1216 20:59:34.634331   60421 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-232338 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 20:59:34.634420   60421 ssh_runner.go:195] Run: crio config
	I1216 20:59:34.688034   60421 cni.go:84] Creating CNI manager for ""
	I1216 20:59:34.688059   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:34.688068   60421 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:59:34.688093   60421 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.240 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-232338 NodeName:no-preload-232338 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 20:59:34.688277   60421 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-232338"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.240"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.240"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 20:59:34.688356   60421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 20:59:34.699709   60421 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:59:34.699784   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:59:34.710306   60421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1216 20:59:34.732401   60421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:59:34.757561   60421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
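	(Once kubeadm.yaml.new is on the guest, the generated config can be sanity-checked against the same kubeadm binary. This is not a step the test runs; a sketch, assuming the binary and file paths shown above:
	    sudo /var/lib/minikube/binaries/v1.32.0/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new
	)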
	I1216 20:59:34.776094   60421 ssh_runner.go:195] Run: grep 192.168.50.240	control-plane.minikube.internal$ /etc/hosts
	I1216 20:59:34.780341   60421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:34.794025   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:34.930543   60421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:59:34.948720   60421 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338 for IP: 192.168.50.240
	I1216 20:59:34.948752   60421 certs.go:194] generating shared ca certs ...
	I1216 20:59:34.948776   60421 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:59:34.949035   60421 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:59:34.949094   60421 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:59:34.949115   60421 certs.go:256] generating profile certs ...
	I1216 20:59:34.949243   60421 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.key
	I1216 20:59:34.949327   60421 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.key.674e04e3
	I1216 20:59:34.949379   60421 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.key
	I1216 20:59:34.949509   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:59:34.949547   60421 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:59:34.949557   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:59:34.949582   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:59:34.949604   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:59:34.949627   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:59:34.949662   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:34.950648   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:59:34.994491   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:59:35.029853   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:59:35.058834   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:59:35.096870   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 20:59:35.126467   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 20:59:35.160826   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:59:35.186344   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 20:59:35.211125   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:59:35.238705   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:59:35.266485   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:59:35.291729   60421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 20:59:35.311939   60421 ssh_runner.go:195] Run: openssl version
	I1216 20:59:35.318397   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:59:35.332081   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.336967   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.337022   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.343307   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 20:59:35.356515   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:59:35.370380   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.375538   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.375589   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.381736   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:59:35.395677   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:59:35.409029   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.414358   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.414427   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.421352   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
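
The test -L / ln -fs pairs above follow OpenSSL's c_rehash convention: a CA certificate is looked up by the hash of its subject name, so every PEM copied to /usr/share/ca-certificates gets a companion symlink named <subject-hash>.0 under /etc/ssl/certs. A minimal Go sketch of that step (not minikube's actual code; the paths are just the ones from this run) might look like:

    // hashlink.go: c_rehash-style symlinking, as seen in the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkBySubjectHash(pemPath, certsDir string) error {
    	// `openssl x509 -hash -noout` prints the 8-hex-digit subject hash,
    	// the same command ssh_runner executes on the guest.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // replace any stale link, mirroring `ln -fs`
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	// Illustrative paths; 142542.pem is the test-run CA seen in the log.
    	if err := linkBySubjectHash("/usr/share/ca-certificates/142542.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
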
	I1216 20:59:35.435322   60421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:59:35.440479   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 20:59:35.447408   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 20:59:35.453992   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 20:59:35.460713   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 20:59:35.467109   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 20:59:35.473412   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
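
The `-checkend 86400` probes above ask OpenSSL whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status lets the restart path reuse the existing certificates instead of regenerating them. A hedged sketch of the same check:

    // checkend.go: an assumed sketch, not minikube's code, of the
    // `openssl x509 -checkend 86400` probe from the log.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func validForAnotherDay(certPath string) bool {
    	// -checkend exits non-zero when the cert expires within the window.
    	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
    }

    func main() {
    	fmt.Println(validForAnotherDay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
    }
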
	I1216 20:59:35.479720   60421 kubeadm.go:392] StartCluster: {Name:no-preload-232338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32
.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:59:35.479824   60421 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:59:35.479901   60421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:35.521238   60421 cri.go:89] found id: ""
	I1216 20:59:35.521331   60421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 20:59:35.534818   60421 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 20:59:35.534848   60421 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 20:59:35.534893   60421 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 20:59:35.547460   60421 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 20:59:35.548501   60421 kubeconfig.go:125] found "no-preload-232338" server: "https://192.168.50.240:8443"
	I1216 20:59:35.550575   60421 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 20:59:35.560957   60421 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.240
	I1216 20:59:35.561018   60421 kubeadm.go:1160] stopping kube-system containers ...
	I1216 20:59:35.561033   60421 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 20:59:35.561094   60421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:35.598970   60421 cri.go:89] found id: ""
	I1216 20:59:35.599082   60421 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 20:59:35.618027   60421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:59:35.629418   60421 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:59:35.629455   60421 kubeadm.go:157] found existing configuration files:
	
	I1216 20:59:35.629501   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 20:59:35.639825   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:59:35.639896   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:59:35.650676   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 20:59:35.662171   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:59:35.662228   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:59:35.674780   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 20:59:35.686565   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:59:35.686640   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:59:35.698956   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 20:59:35.710813   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:59:35.710874   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
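
The grep/rm pairs above apply a simple staleness rule: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted so that the `kubeadm init phase kubeconfig all` step a few lines below can regenerate it. Sketched hypothetically (not minikube's exact code) in Go:

    // staleconf.go: drop kubeconfigs that point at the wrong API server endpoint.
    package main

    import (
    	"bytes"
    	"os"
    )

    func removeIfStale(path, wantServer string) {
    	data, err := os.ReadFile(path)
    	if err != nil || !bytes.Contains(data, []byte(wantServer)) {
    		// Missing file or wrong endpoint: remove it, mirroring `rm -f`.
    		os.Remove(path)
    	}
    }

    func main() {
    	const server = "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		removeIfStale("/etc/kubernetes/"+f, server)
    	}
    }
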
	I1216 20:59:35.723307   60421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 20:59:35.734712   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:35.863375   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.021512   60421 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.158099337s)
	I1216 20:59:37.021546   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.269641   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.348978   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.428210   60421 api_server.go:52] waiting for apiserver process to appear ...
	I1216 20:59:37.428296   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:35.800344   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.800861   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Found IP for machine: 192.168.39.162
	I1216 20:59:35.800889   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has current primary IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.800899   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Reserving static IP address...
	I1216 20:59:35.801367   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-327790", mac: "52:54:00:68:47:d5", ip: "192.168.39.162"} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.801395   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Reserved static IP address: 192.168.39.162
	I1216 20:59:35.801419   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | skip adding static IP to network mk-default-k8s-diff-port-327790 - found existing host DHCP lease matching {name: "default-k8s-diff-port-327790", mac: "52:54:00:68:47:d5", ip: "192.168.39.162"}
	I1216 20:59:35.801439   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for SSH to be available...
	I1216 20:59:35.801452   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Getting to WaitForSSH function...
	I1216 20:59:35.803875   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.804226   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.804257   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.804407   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Using SSH client type: external
	I1216 20:59:35.804439   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa (-rw-------)
	I1216 20:59:35.804472   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:35.804493   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | About to run SSH command:
	I1216 20:59:35.804517   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | exit 0
	I1216 20:59:35.935325   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | SSH cmd err, output: <nil>: 
	I1216 20:59:35.935765   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetConfigRaw
	I1216 20:59:35.936442   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:35.938945   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.939369   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.939395   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.939654   60829 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/config.json ...
	I1216 20:59:35.939915   60829 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:35.939938   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:35.940183   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:35.942412   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.942758   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.942787   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.942885   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:35.943067   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:35.943205   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:35.943347   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:35.943501   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:35.943687   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:35.943697   60829 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:36.060257   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:36.060297   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.060608   60829 buildroot.go:166] provisioning hostname "default-k8s-diff-port-327790"
	I1216 20:59:36.060634   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.060853   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.063758   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.064060   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.064097   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.064222   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.064427   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.064600   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.064745   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.064910   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.065132   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.065151   60829 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-327790 && echo "default-k8s-diff-port-327790" | sudo tee /etc/hostname
	I1216 20:59:36.194522   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-327790
	
	I1216 20:59:36.194555   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.197422   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.197770   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.197818   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.198007   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.198217   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.198446   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.198606   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.198803   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.199037   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.199062   60829 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-327790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-327790/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-327790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:36.320779   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:36.320808   60829 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:36.320833   60829 buildroot.go:174] setting up certificates
	I1216 20:59:36.320845   60829 provision.go:84] configureAuth start
	I1216 20:59:36.320854   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.321171   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:36.323701   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.324019   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.324044   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.324254   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.326002   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.326317   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.326348   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.326478   60829 provision.go:143] copyHostCerts
	I1216 20:59:36.326555   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:36.326567   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:36.326635   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:36.326747   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:36.326759   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:36.326786   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:36.326856   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:36.326866   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:36.326887   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:36.326949   60829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-327790 san=[127.0.0.1 192.168.39.162 default-k8s-diff-port-327790 localhost minikube]
	I1216 20:59:36.480215   60829 provision.go:177] copyRemoteCerts
	I1216 20:59:36.480278   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:36.480304   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.482859   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.483213   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.483258   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.483500   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.483712   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.483903   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.484087   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:36.571252   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:36.599399   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1216 20:59:36.624194   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 20:59:36.649294   60829 provision.go:87] duration metric: took 328.437433ms to configureAuth
	I1216 20:59:36.649325   60829 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:36.649494   60829 config.go:182] Loaded profile config "default-k8s-diff-port-327790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:59:36.649567   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.652411   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.652838   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.652868   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.653006   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.653264   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.653490   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.653704   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.653879   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.654059   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.654076   60829 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:36.893006   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:36.893043   60829 machine.go:96] duration metric: took 953.113126ms to provisionDockerMachine
	I1216 20:59:36.893057   60829 start.go:293] postStartSetup for "default-k8s-diff-port-327790" (driver="kvm2")
	I1216 20:59:36.893070   60829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:36.893101   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:36.893466   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:36.893494   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.896151   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.896531   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.896561   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.896683   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.896893   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.897100   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.897280   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:36.982077   60829 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:36.986598   60829 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:36.986624   60829 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:36.986702   60829 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:36.986795   60829 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:36.986919   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:36.996453   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:37.021838   60829 start.go:296] duration metric: took 128.770799ms for postStartSetup
	I1216 20:59:37.021873   60829 fix.go:56] duration metric: took 19.961410312s for fixHost
	I1216 20:59:37.021896   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.024668   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.025171   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.025207   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.025369   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.025591   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.025746   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.025892   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.026040   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:37.026257   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:37.026273   60829 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:37.140228   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382777.110726967
	
	I1216 20:59:37.140254   60829 fix.go:216] guest clock: 1734382777.110726967
	I1216 20:59:37.140264   60829 fix.go:229] Guest: 2024-12-16 20:59:37.110726967 +0000 UTC Remote: 2024-12-16 20:59:37.021877328 +0000 UTC m=+246.706572335 (delta=88.849639ms)
	I1216 20:59:37.140308   60829 fix.go:200] guest clock delta is within tolerance: 88.849639ms
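
The clock check above runs `date +%s.%N` on the guest and compares the result with the host's own wall clock; here the skew of ~88.8ms is inside tolerance, so no resync is needed. The arithmetic can be reproduced from the logged values (the 2-second tolerance below is an illustrative assumption, not a value read from the minikube source):

    // clockdelta.go: recompute the guest/host clock delta from the logged values.
    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	// guestOut is the literal `date +%s.%N` output captured in the log.
    	guestOut := "1734382777.110726967"
    	parts := strings.SplitN(guestOut, ".", 2)
    	secs, _ := strconv.ParseInt(parts[0], 10, 64)
    	nanos, _ := strconv.ParseInt(parts[1], 10, 64)
    	guest := time.Unix(secs, nanos)

    	// Host-side timestamp from the same log line: 2024-12-16 20:59:37.021877328 UTC.
    	remote := time.Date(2024, 12, 16, 20, 59, 37, 21877328, time.UTC)
    	delta := guest.Sub(remote)
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, math.Abs(delta.Seconds()) < 2)
    }
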
	I1216 20:59:37.140315   60829 start.go:83] releasing machines lock for "default-k8s-diff-port-327790", held for 20.079880217s
	I1216 20:59:37.140347   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.140632   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:37.143268   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.143748   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.143775   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.143983   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144601   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144789   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144883   60829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:37.144930   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.145028   60829 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:37.145060   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.147817   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148192   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.148219   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148315   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148364   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.148576   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.148755   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.148776   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148804   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.148964   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:37.149020   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.149141   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.149285   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.149439   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:37.232354   60829 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:37.261803   60829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:37.416094   60829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:37.425458   60829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:37.425566   60829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:37.448873   60829 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:37.448914   60829 start.go:495] detecting cgroup driver to use...
	I1216 20:59:37.449014   60829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:37.472474   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:37.492445   60829 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:37.492518   60829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:37.510478   60829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:37.525452   60829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:37.642105   60829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:37.814506   60829 docker.go:233] disabling docker service ...
	I1216 20:59:37.814590   60829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:37.829046   60829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:37.845049   60829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:38.009931   60829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:38.158000   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:38.174376   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:38.197489   60829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 20:59:38.197555   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.213974   60829 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:38.214034   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.230383   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.244599   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.257574   60829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:38.273377   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.285854   60829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.312687   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.329105   60829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:38.343596   60829 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:38.343679   60829 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:38.362530   60829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
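
The failed sysctl above is expected on a fresh guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the fallback is `modprobe br_netfilter` followed by enabling IPv4 forwarding, both of which pod networking through iptables needs. A hypothetical sketch of that fallback:

    // netfilter.go: load br_netfilter if its sysctl is absent, then enable forwarding.
    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	// The sysctl file only exists once br_netfilter is loaded.
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
    		// Mirrors `sudo modprobe br_netfilter` from the log.
    		_ = exec.Command("modprobe", "br_netfilter").Run()
    	}
    	// Mirrors `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	_ = os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
    }
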
	I1216 20:59:38.374384   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:38.564793   60829 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:38.682792   60829 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:38.682873   60829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:38.689164   60829 start.go:563] Will wait 60s for crictl version
	I1216 20:59:38.689251   60829 ssh_runner.go:195] Run: which crictl
	I1216 20:59:38.693994   60829 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:38.746808   60829 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:38.746913   60829 ssh_runner.go:195] Run: crio --version
	I1216 20:59:38.788490   60829 ssh_runner.go:195] Run: crio --version
	I1216 20:59:38.823957   60829 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 20:59:37.167470   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .Start
	I1216 20:59:37.167715   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring networks are active...
	I1216 20:59:37.168626   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring network default is active
	I1216 20:59:37.169114   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring network mk-old-k8s-version-847766 is active
	I1216 20:59:37.169670   60933 main.go:141] libmachine: (old-k8s-version-847766) Getting domain xml...
	I1216 20:59:37.170345   60933 main.go:141] libmachine: (old-k8s-version-847766) Creating domain...
	I1216 20:59:38.535579   60933 main.go:141] libmachine: (old-k8s-version-847766) Waiting to get IP...
	I1216 20:59:38.536661   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:38.537089   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:38.537174   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:38.537078   61973 retry.go:31] will retry after 277.62307ms: waiting for machine to come up
	I1216 20:59:38.816788   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:38.817329   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:38.817360   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:38.817272   61973 retry.go:31] will retry after 346.694382ms: waiting for machine to come up
	I1216 20:59:39.165778   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:39.166377   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:39.166436   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:39.166355   61973 retry.go:31] will retry after 416.599295ms: waiting for machine to come up
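
The `will retry after ...` lines show the wait-for-IP loop polling libvirt's DHCP leases for the domain's MAC address with a randomised, growing delay between attempts, which is why the intervals climb from ~278ms towards a second and more. A self-contained sketch of that pattern (lookupLease is a placeholder, not a real libmachine call, and the backoff constants are illustrative):

    // retrywait.go: poll for a DHCP lease with jittered, growing backoff.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupLease stands in for querying the libvirt network for a DHCP lease.
    func lookupLease(mac string) (string, error) {
    	return "", errors.New("no lease yet")
    }

    func waitForIP(mac string, attempts int) (string, error) {
    	backoff := 250 * time.Millisecond
    	for i := 0; i < attempts; i++ {
    		if ip, err := lookupLease(mac); err == nil {
    			return ip, nil
    		}
    		// Add jitter and grow the base delay, loosely matching the logged pattern.
    		d := backoff + time.Duration(rand.Int63n(int64(backoff)/2))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
    		time.Sleep(d)
    		backoff = backoff * 3 / 2
    	}
    	return "", fmt.Errorf("no IP for %s after %d attempts", mac, attempts)
    }

    func main() {
    	_, _ = waitForIP("52:54:00:c4:f2:8a", 5)
    }
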
	I1216 20:59:38.825413   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:38.828442   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:38.828836   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:38.828870   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:38.829125   60829 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:38.833715   60829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:38.848989   60829 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-327790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:38.849121   60829 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:59:38.849169   60829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:38.891356   60829 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 20:59:38.891432   60829 ssh_runner.go:195] Run: which lz4
	I1216 20:59:38.896669   60829 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 20:59:38.901209   60829 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 20:59:38.901253   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1216 20:59:37.928929   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:38.428939   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:38.454184   60421 api_server.go:72] duration metric: took 1.02597754s to wait for apiserver process to appear ...
	I1216 20:59:38.454211   60421 api_server.go:88] waiting for apiserver healthz status ...
	I1216 20:59:38.454252   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:38.454842   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 20:59:38.954378   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:39.585259   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:39.585762   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:39.585791   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:39.585737   61973 retry.go:31] will retry after 526.969594ms: waiting for machine to come up
	I1216 20:59:40.114653   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:40.115175   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:40.115205   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:40.115140   61973 retry.go:31] will retry after 502.283372ms: waiting for machine to come up
	I1216 20:59:40.619067   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:40.619633   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:40.619682   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:40.619571   61973 retry.go:31] will retry after 764.799982ms: waiting for machine to come up
	I1216 20:59:41.385515   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:41.386066   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:41.386100   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:41.386027   61973 retry.go:31] will retry after 982.237202ms: waiting for machine to come up
	I1216 20:59:42.369934   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:42.370414   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:42.370449   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:42.370373   61973 retry.go:31] will retry after 1.163280736s: waiting for machine to come up
	I1216 20:59:43.534829   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:43.535194   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:43.535224   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:43.535143   61973 retry.go:31] will retry after 1.630958514s: waiting for machine to come up
	I1216 20:59:40.539994   60829 crio.go:462] duration metric: took 1.643361409s to copy over tarball
	I1216 20:59:40.540066   60829 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 20:59:42.840346   60829 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.30025199s)
	I1216 20:59:42.840382   60829 crio.go:469] duration metric: took 2.300357568s to extract the tarball
	I1216 20:59:42.840392   60829 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 20:59:42.881650   60829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:42.928089   60829 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 20:59:42.928120   60829 cache_images.go:84] Images are preloaded, skipping loading
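For a quick manual spot-check that the preloaded images are really visible to CRI-O after the extraction above, something along these lines should work on the node (crictl is the same tool the log invokes; kube-apiserver is one of the images expected for v1.32.0):

	# list CRI-O images and confirm a control-plane image is present
	sudo crictl images | grep kube-apiserver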
	I1216 20:59:42.928129   60829 kubeadm.go:934] updating node { 192.168.39.162 8444 v1.32.0 crio true true} ...
	I1216 20:59:42.928222   60829 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-327790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 20:59:42.928286   60829 ssh_runner.go:195] Run: crio config
	I1216 20:59:42.983315   60829 cni.go:84] Creating CNI manager for ""
	I1216 20:59:42.983348   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:42.983360   60829 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:59:42.983396   60829 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8444 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-327790 NodeName:default-k8s-diff-port-327790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 20:59:42.983556   60829 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-327790"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.162"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 20:59:42.983631   60829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 20:59:42.996192   60829 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:59:42.996283   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:59:43.008389   60829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 20:59:43.027984   60829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:59:43.045672   60829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
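A hedged sketch of validating the freshly copied kubeadm config on the node before it is applied; this assumes the "kubeadm config validate" subcommand available in recent releases (including v1.32) and reuses the binary and file paths shown in the log:

	# check the generated config for unknown fields or invalid values
	sudo /var/lib/minikube/binaries/v1.32.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new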
	I1216 20:59:43.063620   60829 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1216 20:59:43.067925   60829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:43.082946   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:43.220929   60829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:59:43.243843   60829 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790 for IP: 192.168.39.162
	I1216 20:59:43.243870   60829 certs.go:194] generating shared ca certs ...
	I1216 20:59:43.243888   60829 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:59:43.244125   60829 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:59:43.244185   60829 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:59:43.244200   60829 certs.go:256] generating profile certs ...
	I1216 20:59:43.244324   60829 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.key
	I1216 20:59:43.244400   60829 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.key.0f0bf709
	I1216 20:59:43.244449   60829 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.key
	I1216 20:59:43.244606   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:59:43.244649   60829 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:59:43.244666   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:59:43.244689   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:59:43.244711   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:59:43.244731   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:59:43.244776   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:43.245449   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:59:43.283598   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:59:43.309321   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:59:43.343071   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:59:43.379763   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1216 20:59:43.409794   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 20:59:43.437074   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:59:43.462616   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 20:59:43.487711   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:59:43.512636   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:59:43.539050   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:59:43.566507   60829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 20:59:43.584425   60829 ssh_runner.go:195] Run: openssl version
	I1216 20:59:43.590996   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:59:43.604384   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.609342   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.609404   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.615902   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:59:43.627432   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:59:43.638929   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.644189   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.644267   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.650550   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 20:59:43.662678   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:59:43.674981   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.680022   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.680113   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.686159   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
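The hash-named links created above can be reproduced by hand if needed; a minimal sketch assuming the same CA file (openssl prints the subject-name hash that forms the link name):

	# recreate the c_rehash-style symlink for the minikube CA
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"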
	I1216 20:59:43.697897   60829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:59:43.702835   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 20:59:43.709262   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 20:59:43.716370   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 20:59:43.725031   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 20:59:43.732876   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 20:59:43.739810   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
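Each -checkend run above succeeds only if the certificate stays valid for the given number of seconds; a minimal manual equivalent for one of the same files (86400 seconds is 24 hours):

	if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
		echo "cert valid for at least another 24h"
	else
		echo "cert expires within 24h or could not be read"
	fi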
	I1216 20:59:43.746998   60829 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-327790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:59:43.747131   60829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:59:43.747189   60829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:43.791895   60829 cri.go:89] found id: ""
	I1216 20:59:43.791979   60829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 20:59:43.802858   60829 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 20:59:43.802886   60829 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 20:59:43.802943   60829 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 20:59:43.813313   60829 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 20:59:43.814296   60829 kubeconfig.go:125] found "default-k8s-diff-port-327790" server: "https://192.168.39.162:8444"
	I1216 20:59:43.816374   60829 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 20:59:43.825834   60829 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1216 20:59:43.825871   60829 kubeadm.go:1160] stopping kube-system containers ...
	I1216 20:59:43.825884   60829 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 20:59:43.825934   60829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:43.870890   60829 cri.go:89] found id: ""
	I1216 20:59:43.870965   60829 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 20:59:43.888155   60829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:59:43.898356   60829 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:59:43.898381   60829 kubeadm.go:157] found existing configuration files:
	
	I1216 20:59:43.898445   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1216 20:59:43.908232   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:59:43.908310   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:59:43.918637   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1216 20:59:43.928255   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:59:43.928343   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:59:43.938479   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1216 20:59:43.948085   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:59:43.948157   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:59:43.959080   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1216 20:59:43.969218   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:59:43.969275   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 20:59:43.980063   60829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 20:59:43.990768   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:44.125741   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:44.845177   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:45.049512   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:45.162055   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
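Once the control-plane and etcd phases finish, the static pod manifests should sit under the staticPodPath configured earlier; a quick hedged check on the node:

	# expect etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml
	sudo ls -l /etc/kubernetes/manifests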
	I1216 20:59:45.284927   60829 api_server.go:52] waiting for apiserver process to appear ...
	I1216 20:59:45.285036   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:43.954985   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:43.955087   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:45.168144   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:45.168719   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:45.168750   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:45.168671   61973 retry.go:31] will retry after 1.835631107s: waiting for machine to come up
	I1216 20:59:47.005854   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:47.006380   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:47.006422   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:47.006339   61973 retry.go:31] will retry after 1.943800898s: waiting for machine to come up
	I1216 20:59:48.951552   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:48.952050   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:48.952114   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:48.952008   61973 retry.go:31] will retry after 2.949898251s: waiting for machine to come up
	I1216 20:59:45.785964   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:46.285989   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:46.339555   60829 api_server.go:72] duration metric: took 1.054628295s to wait for apiserver process to appear ...
	I1216 20:59:46.339597   60829 api_server.go:88] waiting for apiserver healthz status ...
	I1216 20:59:46.339636   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:46.340197   60829 api_server.go:269] stopped: https://192.168.39.162:8444/healthz: Get "https://192.168.39.162:8444/healthz": dial tcp 192.168.39.162:8444: connect: connection refused
	I1216 20:59:46.839771   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.461907   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 20:59:49.461943   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 20:59:49.461958   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.513069   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 20:59:49.513121   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 20:59:49.840517   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.846051   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 20:59:49.846086   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 20:59:50.339824   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:50.347663   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 20:59:50.347708   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 20:59:50.840385   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:50.844943   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1216 20:59:50.854518   60829 api_server.go:141] control plane version: v1.32.0
	I1216 20:59:50.854546   60829 api_server.go:131] duration metric: took 4.514941385s to wait for apiserver health ...
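The same healthz endpoint can be probed by hand; a minimal sketch assuming curl is available (the ?verbose form returns the per-check [+]/[-] listing seen above, and anonymous requests may get 403 until the RBAC bootstrap roles exist, which is exactly what the earlier 403 responses show):

	curl -k "https://192.168.39.162:8444/healthz?verbose"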
	I1216 20:59:50.854554   60829 cni.go:84] Creating CNI manager for ""
	I1216 20:59:50.854560   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:50.856538   60829 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 20:59:48.956352   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:48.956414   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:51.905108   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:51.905560   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:51.905594   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:51.905505   61973 retry.go:31] will retry after 3.44069953s: waiting for machine to come up
	I1216 20:59:50.858169   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 20:59:50.882809   60829 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 20:59:50.912787   60829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 20:59:50.933650   60829 system_pods.go:59] 8 kube-system pods found
	I1216 20:59:50.933693   60829 system_pods.go:61] "coredns-668d6bf9bc-tqh9s" [56b4db37-b6bc-49eb-b45f-b8b4d1f16eed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 20:59:50.933705   60829 system_pods.go:61] "etcd-default-k8s-diff-port-327790" [067f7c41-3763-42d3-af06-ad50fad3d206] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 20:59:50.933713   60829 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-327790" [f1964b5b-9d2b-4f82-afc6-2f359c9b8827] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 20:59:50.933722   60829 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-327790" [fd7479e3-be26-4bb0-b53a-e40766a33996] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 20:59:50.933742   60829 system_pods.go:61] "kube-proxy-mplxr" [027abdc5-7022-4528-a93f-36f3b10115ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 20:59:50.933751   60829 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-327790" [d7416a53-ccb4-46fd-9992-46cbf7ec0a3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 20:59:50.933763   60829 system_pods.go:61] "metrics-server-f79f97bbb-hlt7s" [d42906e3-387c-493e-9d06-5bb654dc9784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 20:59:50.933772   60829 system_pods.go:61] "storage-provisioner" [c774635a-faca-4a1a-8f4e-2161447ebaa1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 20:59:50.933785   60829 system_pods.go:74] duration metric: took 20.968988ms to wait for pod list to return data ...
	I1216 20:59:50.933804   60829 node_conditions.go:102] verifying NodePressure condition ...
	I1216 20:59:50.937958   60829 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 20:59:50.937986   60829 node_conditions.go:123] node cpu capacity is 2
	I1216 20:59:50.938008   60829 node_conditions.go:105] duration metric: took 4.196302ms to run NodePressure ...
	I1216 20:59:50.938030   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:51.231412   60829 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 20:59:51.236005   60829 kubeadm.go:739] kubelet initialised
	I1216 20:59:51.236029   60829 kubeadm.go:740] duration metric: took 4.585977ms waiting for restarted kubelet to initialise ...
	I1216 20:59:51.236042   60829 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 20:59:51.243608   60829 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace to be "Ready" ...
	I1216 20:59:53.250907   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
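A hedged kubectl equivalent of this readiness wait, assuming the kubeconfig context carries the profile name:

	kubectl --context default-k8s-diff-port-327790 -n kube-system wait --for=condition=Ready pod/coredns-668d6bf9bc-tqh9s --timeout=4m0s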
	I1216 20:59:56.696377   60215 start.go:364] duration metric: took 54.44579772s to acquireMachinesLock for "embed-certs-606219"
	I1216 20:59:56.696450   60215 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:56.696470   60215 fix.go:54] fixHost starting: 
	I1216 20:59:56.696862   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:56.696902   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:56.714627   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42069
	I1216 20:59:56.715074   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:56.715599   60215 main.go:141] libmachine: Using API Version  1
	I1216 20:59:56.715629   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:56.715953   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:56.716116   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 20:59:56.716252   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 20:59:56.717876   60215 fix.go:112] recreateIfNeeded on embed-certs-606219: state=Stopped err=<nil>
	I1216 20:59:56.717902   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	W1216 20:59:56.718088   60215 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:56.720072   60215 out.go:177] * Restarting existing kvm2 VM for "embed-certs-606219" ...
	I1216 20:59:53.957328   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:53.957395   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:55.349557   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.350105   60933 main.go:141] libmachine: (old-k8s-version-847766) Found IP for machine: 192.168.72.240
	I1216 20:59:55.350129   60933 main.go:141] libmachine: (old-k8s-version-847766) Reserving static IP address...
	I1216 20:59:55.350140   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has current primary IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.350574   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "old-k8s-version-847766", mac: "52:54:00:c4:f2:8a", ip: "192.168.72.240"} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.350608   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | skip adding static IP to network mk-old-k8s-version-847766 - found existing host DHCP lease matching {name: "old-k8s-version-847766", mac: "52:54:00:c4:f2:8a", ip: "192.168.72.240"}
	I1216 20:59:55.350623   60933 main.go:141] libmachine: (old-k8s-version-847766) Reserved static IP address: 192.168.72.240
	I1216 20:59:55.350646   60933 main.go:141] libmachine: (old-k8s-version-847766) Waiting for SSH to be available...
	I1216 20:59:55.350662   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Getting to WaitForSSH function...
	I1216 20:59:55.353011   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.353346   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.353369   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.353535   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Using SSH client type: external
	I1216 20:59:55.353560   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa (-rw-------)
	I1216 20:59:55.353592   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:55.353606   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | About to run SSH command:
	I1216 20:59:55.353621   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | exit 0
	I1216 20:59:55.480726   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | SSH cmd err, output: <nil>: 
	I1216 20:59:55.481062   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetConfigRaw
	I1216 20:59:55.481692   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:55.484113   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.484500   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.484537   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.484769   60933 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/config.json ...
	I1216 20:59:55.484985   60933 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:55.485008   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:55.485220   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.487511   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.487835   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.487862   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.487958   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.488134   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.488268   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.488405   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.488546   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.488725   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.488735   60933 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:55.596092   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:55.596127   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.596401   60933 buildroot.go:166] provisioning hostname "old-k8s-version-847766"
	I1216 20:59:55.596426   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.596644   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.599286   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.599631   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.599662   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.599814   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.600010   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.600166   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.600299   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.600462   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.600665   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.600678   60933 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-847766 && echo "old-k8s-version-847766" | sudo tee /etc/hostname
	I1216 20:59:55.731851   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-847766
	
	I1216 20:59:55.731879   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.734802   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.735155   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.735186   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.735422   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.735650   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.735815   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.736030   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.736194   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.736377   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.736393   60933 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-847766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-847766/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-847766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:55.857050   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:55.857108   60933 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:55.857138   60933 buildroot.go:174] setting up certificates
	I1216 20:59:55.857163   60933 provision.go:84] configureAuth start
	I1216 20:59:55.857180   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.857505   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:55.860286   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.860613   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.860643   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.860826   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.863292   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.863682   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.863709   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.863871   60933 provision.go:143] copyHostCerts
	I1216 20:59:55.863920   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:55.863929   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:55.863986   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:55.864069   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:55.864077   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:55.864104   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:55.864159   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:55.864177   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:55.864202   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:55.864250   60933 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-847766 san=[127.0.0.1 192.168.72.240 localhost minikube old-k8s-version-847766]
	I1216 20:59:56.058548   60933 provision.go:177] copyRemoteCerts
	I1216 20:59:56.058603   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:56.058638   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.061354   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.061666   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.061707   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.061838   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.062039   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.062200   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.062356   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.146788   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:56.172789   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1216 20:59:56.198040   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 20:59:56.222476   60933 provision.go:87] duration metric: took 365.299433ms to configureAuth
	I1216 20:59:56.222505   60933 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:56.222706   60933 config.go:182] Loaded profile config "old-k8s-version-847766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:59:56.222790   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.225376   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.225752   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.225779   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.225965   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.226182   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.226363   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.226516   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.226687   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:56.226887   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:56.226906   60933 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:56.451434   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:56.451464   60933 machine.go:96] duration metric: took 966.463181ms to provisionDockerMachine
	I1216 20:59:56.451478   60933 start.go:293] postStartSetup for "old-k8s-version-847766" (driver="kvm2")
	I1216 20:59:56.451513   60933 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:56.451541   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.451926   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:56.451980   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.454840   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.455302   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.455331   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.455454   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.455661   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.455814   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.455988   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.542904   60933 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:56.547362   60933 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:56.547389   60933 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:56.547467   60933 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:56.547568   60933 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:56.547677   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:56.557902   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:56.582796   60933 start.go:296] duration metric: took 131.303406ms for postStartSetup
	I1216 20:59:56.582846   60933 fix.go:56] duration metric: took 19.442354832s for fixHost
	I1216 20:59:56.582872   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.585478   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.585803   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.585831   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.586011   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.586194   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.586358   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.586472   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.586640   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:56.586809   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:56.586819   60933 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:56.696254   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382796.650794736
	
	I1216 20:59:56.696274   60933 fix.go:216] guest clock: 1734382796.650794736
	I1216 20:59:56.696281   60933 fix.go:229] Guest: 2024-12-16 20:59:56.650794736 +0000 UTC Remote: 2024-12-16 20:59:56.582851742 +0000 UTC m=+262.230512454 (delta=67.942994ms)
	I1216 20:59:56.696299   60933 fix.go:200] guest clock delta is within tolerance: 67.942994ms
	I1216 20:59:56.696304   60933 start.go:83] releasing machines lock for "old-k8s-version-847766", held for 19.555844424s
	I1216 20:59:56.696333   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.696608   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:56.699486   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.699932   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.699964   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.700068   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700645   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700846   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700948   60933 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:56.701007   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.701115   60933 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:56.701140   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.703937   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704117   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704314   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.704342   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704496   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.704567   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.704601   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704680   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.704762   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.704836   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.704982   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.704987   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.705134   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.705259   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.784295   60933 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:56.817481   60933 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:56.968124   60933 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:56.979827   60933 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:56.979892   60933 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:56.997867   60933 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:56.997891   60933 start.go:495] detecting cgroup driver to use...
	I1216 20:59:56.997954   60933 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:57.016064   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:57.031596   60933 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:57.031665   60933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:57.047562   60933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:57.062737   60933 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:57.183918   60933 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:57.354699   60933 docker.go:233] disabling docker service ...
	I1216 20:59:57.354794   60933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:57.373311   60933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:57.390014   60933 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:57.523623   60933 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:57.656261   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:57.671374   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:57.692647   60933 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1216 20:59:57.692709   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.704496   60933 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:57.704548   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.715848   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.727022   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.738899   60933 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:57.756457   60933 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:57.773236   60933 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:57.773289   60933 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:57.789209   60933 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 20:59:57.800881   60933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:57.927794   60933 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:58.038173   60933 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:58.038256   60933 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:58.044633   60933 start.go:563] Will wait 60s for crictl version
	I1216 20:59:58.044705   60933 ssh_runner.go:195] Run: which crictl
	I1216 20:59:58.048781   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:58.088449   60933 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
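After restarting CRI-O the tool waits up to 60s for /var/run/crio/crio.sock to appear before asking crictl for its version. A minimal sketch of that bounded polling (the poll interval is an assumption; the socket path is the one from the log):

	package main

	import (
		"fmt"
		"log"
		"os"
		"time"
	)

	// waitForPath polls until the path exists or the deadline passes.
	func waitForPath(path string, timeout, interval time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
			log.Fatal(err)
		}
		fmt.Println("socket is present")
	}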
	I1216 20:59:58.088579   60933 ssh_runner.go:195] Run: crio --version
	I1216 20:59:58.119211   60933 ssh_runner.go:195] Run: crio --version
	I1216 20:59:58.151411   60933 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1216 20:59:58.152582   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:58.155196   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:58.155558   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:58.155587   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:58.155763   60933 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:58.160369   60933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:58.174013   60933 kubeadm.go:883] updating cluster {Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:58.174155   60933 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:59:58.174212   60933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:58.226674   60933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 20:59:58.226747   60933 ssh_runner.go:195] Run: which lz4
	I1216 20:59:58.231330   60933 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 20:59:58.236178   60933 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 20:59:58.236214   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
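The preload decision above hinges on whether `sudo crictl images --output json` already lists the expected kube-apiserver tag; when it does not, the ~473 MB preload tarball is copied over and extracted. A sketch of that check, assuming crictl's JSON output carries an images array with repoTags fields (the struct shape is an assumption, not confirmed by this log):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// imageList models only the fields this check needs from crictl's JSON
	// output (field names are an assumption about crictl's JSON shape).
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func hasImage(tag string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			return false, err
		}
		for _, img := range list.Images {
			for _, t := range img.RepoTags {
				if t == tag {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.20.0")
		if err != nil {
			log.Fatal(err)
		}
		if !ok {
			fmt.Println("assuming images are not preloaded; fetch the preload tarball")
		}
	}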
	I1216 20:59:56.721746   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Start
	I1216 20:59:56.721946   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring networks are active...
	I1216 20:59:56.722810   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring network default is active
	I1216 20:59:56.723209   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring network mk-embed-certs-606219 is active
	I1216 20:59:56.723644   60215 main.go:141] libmachine: (embed-certs-606219) Getting domain xml...
	I1216 20:59:56.724387   60215 main.go:141] libmachine: (embed-certs-606219) Creating domain...
	I1216 20:59:58.005906   60215 main.go:141] libmachine: (embed-certs-606219) Waiting to get IP...
	I1216 20:59:58.006646   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.007021   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.007136   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.007017   62108 retry.go:31] will retry after 280.124694ms: waiting for machine to come up
	I1216 20:59:58.288552   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.289049   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.289078   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.289013   62108 retry.go:31] will retry after 299.873899ms: waiting for machine to come up
	I1216 20:59:58.590757   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.591593   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.591625   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.591487   62108 retry.go:31] will retry after 486.884982ms: waiting for machine to come up
	I1216 20:59:59.079996   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:59.080618   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:59.080649   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:59.080581   62108 retry.go:31] will retry after 608.856993ms: waiting for machine to come up
	I1216 20:59:59.691549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:59.692107   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:59.692139   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:59.692064   62108 retry.go:31] will retry after 730.774006ms: waiting for machine to come up
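The embed-certs-606219 lines interleaved here show retry.go waiting for the freshly started domain to obtain a DHCP lease, with a growing, slightly randomized delay between attempts. A minimal sketch of that retry-with-backoff pattern (the backoff constants and jitter are assumptions for illustration, not minikube's exact policy):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
	// sleeping a growing, jittered delay between tries.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		delay := base
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			wait := delay + jitter
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2
		}
		return err
	}

	func main() {
		tries := 0
		_ = retryWithBackoff(10, 250*time.Millisecond, func() error {
			tries++
			if tries < 4 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
		fmt.Println("machine has an IP")
	}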
	I1216 20:59:55.752607   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 20:59:58.251902   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:00.254126   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 20:59:58.958114   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:58.958161   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.567722   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": read tcp 192.168.50.1:38738->192.168.50.240:8443: read: connection reset by peer
	I1216 20:59:59.567773   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.568271   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 20:59:59.954745   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.955447   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 21:00:00.455116   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:00.456036   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 21:00:00.954418   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
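The 60421 process is polling the apiserver's /healthz endpoint and treating connection resets, refused connections, and non-200 bodies as "not ready yet". A hedged sketch of such a probe (the insecure TLS config stands in for the test cluster's self-signed certs; it is not how minikube authenticates, and the interval is an assumption):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz does one GET against the apiserver healthz endpoint and
	// reports whether it answered 200 OK, along with the response detail.
	func probeHealthz(url string) (bool, string) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Verification skipped only because the test apiserver uses its own CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false, err.Error()
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK, fmt.Sprintf("%d: %s", resp.StatusCode, body)
	}

	func main() {
		for {
			ok, detail := probeHealthz("https://192.168.50.240:8443/healthz")
			if ok {
				fmt.Println("apiserver is healthy")
				return
			}
			fmt.Println("not ready:", detail)
			time.Sleep(500 * time.Millisecond)
		}
	}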
	I1216 21:00:00.100507   60933 crio.go:462] duration metric: took 1.869217257s to copy over tarball
	I1216 21:00:00.100619   60933 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 21:00:03.581430   60933 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.480755636s)
	I1216 21:00:03.581469   60933 crio.go:469] duration metric: took 3.480924144s to extract the tarball
	I1216 21:00:03.581478   60933 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 21:00:03.627932   60933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:03.667985   60933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 21:00:03.668013   60933 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 21:00:03.668078   60933 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:03.668110   60933 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.668207   60933 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.668262   60933 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.668262   60933 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:03.668332   60933 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1216 21:00:03.668215   60933 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.668092   60933 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.670096   60933 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:03.670294   60933 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.670305   60933 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.670305   60933 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.670333   60933 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.670394   60933 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1216 21:00:03.670396   60933 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:03.670467   60933 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.861573   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.869704   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.885911   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.904748   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1216 21:00:03.905328   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.906138   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.936548   60933 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1216 21:00:03.936658   60933 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.936736   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.019039   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.033811   60933 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1216 21:00:04.033863   60933 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.033927   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.082946   60933 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1216 21:00:04.082995   60933 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.083008   60933 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1216 21:00:04.083050   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083055   60933 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1216 21:00:04.083063   60933 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1216 21:00:04.083073   60933 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.083133   60933 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1216 21:00:04.083203   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.083205   60933 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.083306   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083145   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083139   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.123434   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.123702   60933 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1216 21:00:04.123740   60933 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.123786   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.150878   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:04.155586   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.155774   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.155877   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.155968   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.156205   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.226110   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.226429   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
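The cache_images pass above decides per image whether a transfer is needed by asking the runtime for the stored image ID (`sudo podman image inspect --format {{.Id}}`) and comparing it with the expected hash; mismatches are removed with `crictl rmi` and re-loaded from the local cache. A sketch of that comparison (the expected hash is copied from the coredns line in the log; the helper itself is illustrative):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// runtimeImageID asks podman for the stored ID of an image reference.
	func runtimeImageID(ref string) (string, error) {
		out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		const ref = "registry.k8s.io/coredns:1.7.0"
		// Hash the cached image is expected to have (copied from the log above).
		const want = "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16"

		got, err := runtimeImageID(ref)
		if err != nil || got != want {
			log.Printf("%q needs transfer: not present at expected hash in container runtime", ref)
			// In minikube this is where `crictl rmi` plus a re-load from the cache happens.
			return
		}
		fmt.Printf("%q already present\n", ref)
	}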
	I1216 21:00:00.424272   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:00.424766   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:00.424795   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:00.424712   62108 retry.go:31] will retry after 947.177724ms: waiting for machine to come up
	I1216 21:00:01.373798   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:01.374448   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:01.374486   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:01.374376   62108 retry.go:31] will retry after 755.735247ms: waiting for machine to come up
	I1216 21:00:02.132092   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:02.132690   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:02.132716   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:02.132636   62108 retry.go:31] will retry after 1.25933291s: waiting for machine to come up
	I1216 21:00:03.393390   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:03.393951   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:03.393987   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:03.393887   62108 retry.go:31] will retry after 1.654271195s: waiting for machine to come up
	I1216 21:00:00.768561   60829 pod_ready.go:93] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:00.768603   60829 pod_ready.go:82] duration metric: took 9.524968022s for pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:00.768619   60829 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:02.778467   60829 pod_ready.go:93] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:02.778507   60829 pod_ready.go:82] duration metric: took 2.009878604s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:02.778523   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:03.290454   60829 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:03.290490   60829 pod_ready.go:82] duration metric: took 511.956426ms for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:03.290505   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
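The 60829 process is doing the standard "wait up to 4m0s for pod X to be Ready" loop: fetch the pod, look for the Ready condition, retry until the deadline. A compact sketch of the same idea with client-go (the kubeconfig path is a placeholder and this mirrors the concept, not minikube's exact pod_ready helper):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod carries a Ready=True condition.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-default-k8s-diff-port-327790", metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling on transient errors
				}
				return podIsReady(pod), nil
			})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("pod is Ready")
	}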
	I1216 21:00:04.533609   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:04.533639   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:04.533655   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:04.679801   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:04.679836   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:04.955306   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.723827   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.723870   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:05.723892   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.750638   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.750674   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:05.955092   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.983280   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.983332   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:06.454742   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:06.467886   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:06.467924   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:06.954428   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:06.960039   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 200:
	ok
	I1216 21:00:06.969187   60421 api_server.go:141] control plane version: v1.32.0
	I1216 21:00:06.969231   60421 api_server.go:131] duration metric: took 28.515011952s to wait for apiserver health ...
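	Note: the healthz probe polled above can be reproduced by hand against the same endpoint (illustrative sketch only; the address and port are taken from the log, -k skips CA verification, adding ?verbose prints the per-check breakdown even on a 200, and depending on the cluster's anonymous-auth setting the request may additionally require client credentials):
	
	  curl -sk "https://192.168.50.240:8443/healthz?verbose"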
	I1216 21:00:06.969242   60421 cni.go:84] Creating CNI manager for ""
	I1216 21:00:06.969249   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:06.971475   60421 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:00:06.973035   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:00:06.992348   60421 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
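	Note: the 1-k8s.conflist written above configures the CNI bridge plugin. A generic bridge/host-local conflist of that kind looks roughly like the following (illustrative sketch of the standard plugin format; the bridge name and subnet are assumptions, and this is not the exact 496-byte file from this run):
	
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "cni0",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	      }
	    ]
	  }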
	I1216 21:00:07.020819   60421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:00:07.035254   60421 system_pods.go:59] 8 kube-system pods found
	I1216 21:00:07.035308   60421 system_pods.go:61] "coredns-668d6bf9bc-snhjf" [c0cf42c8-521a-4d02-9d43-ff7a700b0eca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:00:07.035321   60421 system_pods.go:61] "etcd-no-preload-232338" [01ca2051-5953-44fd-bfff-40aa16ec7aca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 21:00:07.035335   60421 system_pods.go:61] "kube-apiserver-no-preload-232338" [f1fbbb3b-a0e5-4200-89ef-67085e51a31d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 21:00:07.035359   60421 system_pods.go:61] "kube-controller-manager-no-preload-232338" [200039ad-1a2c-4dc4-8307-d8c882d69f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 21:00:07.035373   60421 system_pods.go:61] "kube-proxy-5mw2b" [8fbddf14-8697-451a-a3c7-873fdd437247] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 21:00:07.035382   60421 system_pods.go:61] "kube-scheduler-no-preload-232338" [1b9a7a43-59fc-44ba-9863-04fb90e6554f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 21:00:07.035396   60421 system_pods.go:61] "metrics-server-f79f97bbb-5xf67" [447144e5-11d8-48f7-b2fd-7ab9fb3c04de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:00:07.035409   60421 system_pods.go:61] "storage-provisioner" [fb293bd2-f5be-4086-b821-ffd7df58dd5e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 21:00:07.035420   60421 system_pods.go:74] duration metric: took 14.571089ms to wait for pod list to return data ...
	I1216 21:00:07.035431   60421 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:00:07.044467   60421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:00:07.044592   60421 node_conditions.go:123] node cpu capacity is 2
	I1216 21:00:07.044633   60421 node_conditions.go:105] duration metric: took 9.191874ms to run NodePressure ...
	I1216 21:00:07.044668   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.388388   60421 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 21:00:07.394851   60421 kubeadm.go:739] kubelet initialised
	I1216 21:00:07.394881   60421 kubeadm.go:740] duration metric: took 6.459945ms waiting for restarted kubelet to initialise ...
	I1216 21:00:07.394891   60421 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:00:07.401877   60421 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.410697   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.410732   60421 pod_ready.go:82] duration metric: took 8.80876ms for pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.410744   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.410755   60421 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.418118   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "etcd-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.418149   60421 pod_ready.go:82] duration metric: took 7.383445ms for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.418163   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "etcd-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.418172   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.427341   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "kube-apiserver-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.427414   60421 pod_ready.go:82] duration metric: took 9.234588ms for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.427424   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "kube-apiserver-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.427432   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.435329   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.435378   60421 pod_ready.go:82] duration metric: took 7.931923ms for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.435392   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.435408   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5mw2b" in "kube-system" namespace to be "Ready" ...
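	Note: the pod_ready waits above poll each system-critical pod until its Ready condition is true; an equivalent one-off check with kubectl would be (illustrative sketch, assuming the kubeconfig context is named after the profile):
	
	  kubectl --context no-preload-232338 -n kube-system wait --for=condition=Ready pod/kube-proxy-5mw2b --timeout=4m0s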
	I1216 21:00:04.457220   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.457399   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.457507   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.457596   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.457687   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.613834   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.613870   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.613923   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.613931   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.613960   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.613972   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.619915   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1216 21:00:04.791265   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1216 21:00:04.791297   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.791315   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1216 21:00:04.791352   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1216 21:00:04.791366   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1216 21:00:04.791384   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1216 21:00:04.836463   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1216 21:00:04.836536   60933 cache_images.go:92] duration metric: took 1.168508622s to LoadCachedImages
	W1216 21:00:04.836633   60933 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1216 21:00:04.836649   60933 kubeadm.go:934] updating node { 192.168.72.240 8443 v1.20.0 crio true true} ...
	I1216 21:00:04.836781   60933 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-847766 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 21:00:04.836877   60933 ssh_runner.go:195] Run: crio config
	I1216 21:00:04.898330   60933 cni.go:84] Creating CNI manager for ""
	I1216 21:00:04.898357   60933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:04.898371   60933 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 21:00:04.898396   60933 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.240 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-847766 NodeName:old-k8s-version-847766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1216 21:00:04.898560   60933 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-847766"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 21:00:04.898643   60933 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1216 21:00:04.910946   60933 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 21:00:04.911045   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 21:00:04.923199   60933 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1216 21:00:04.942705   60933 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 21:00:04.976598   60933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
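	Note: the rendered kubeadm config is first written to /var/tmp/minikube/kubeadm.yaml.new and only promoted once it has been compared with the copy already on the node, as the diff and cp steps later in this log show; by hand that amounts to (illustrative sketch):
	
	  sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	    || sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml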
	I1216 21:00:05.001967   60933 ssh_runner.go:195] Run: grep 192.168.72.240	control-plane.minikube.internal$ /etc/hosts
	I1216 21:00:05.006819   60933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:00:05.020604   60933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:05.143039   60933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:00:05.162507   60933 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766 for IP: 192.168.72.240
	I1216 21:00:05.162535   60933 certs.go:194] generating shared ca certs ...
	I1216 21:00:05.162554   60933 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:05.162749   60933 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 21:00:05.162792   60933 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 21:00:05.162803   60933 certs.go:256] generating profile certs ...
	I1216 21:00:05.162907   60933 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.key
	I1216 21:00:05.162976   60933 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key.6c8704df
	I1216 21:00:05.163011   60933 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key
	I1216 21:00:05.163148   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 21:00:05.163176   60933 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 21:00:05.163186   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 21:00:05.163210   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 21:00:05.163278   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 21:00:05.163315   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 21:00:05.163379   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:05.164216   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 21:00:05.222491   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 21:00:05.253517   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 21:00:05.294338   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 21:00:05.342850   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 21:00:05.388068   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 21:00:05.422591   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 21:00:05.471916   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 21:00:05.505836   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 21:00:05.539404   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 21:00:05.570819   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 21:00:05.602079   60933 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 21:00:05.630577   60933 ssh_runner.go:195] Run: openssl version
	I1216 21:00:05.640017   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 21:00:05.653759   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.659573   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.659645   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.666667   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 21:00:05.680061   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 21:00:05.692776   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.698644   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.698728   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.705913   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 21:00:05.730062   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 21:00:05.750034   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.757158   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.757252   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.765679   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
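	Note: each ca-certificates entry above is symlinked under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem); doing the same by hand looks like (illustrative sketch mirroring the openssl/ln commands in the log):
	
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"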
	I1216 21:00:05.782537   60933 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 21:00:05.788291   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 21:00:05.797707   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 21:00:05.807016   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 21:00:05.818160   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 21:00:05.827428   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 21:00:05.836499   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
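	Note: the -checkend 86400 runs above verify that each control-plane certificate stays valid for at least the next 24 hours; a compact equivalent loop (illustrative sketch, certificate names taken from the log) is:
	
	  for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	    sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${crt}.crt" \
	      && echo "${crt}: valid for >24h" || echo "${crt}: expires within 24h"
	  done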
	I1216 21:00:05.846104   60933 kubeadm.go:392] StartCluster: {Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:00:05.846244   60933 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 21:00:05.846331   60933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:05.901274   60933 cri.go:89] found id: ""
	I1216 21:00:05.901376   60933 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 21:00:05.917353   60933 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 21:00:05.917381   60933 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 21:00:05.917439   60933 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 21:00:05.928587   60933 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 21:00:05.932546   60933 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-847766" does not appear in /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:00:05.933844   60933 kubeconfig.go:62] /home/jenkins/minikube-integration/20091-7083/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-847766" cluster setting kubeconfig missing "old-k8s-version-847766" context setting]
	I1216 21:00:05.935400   60933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:05.938054   60933 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 21:00:05.950384   60933 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.240
	I1216 21:00:05.950433   60933 kubeadm.go:1160] stopping kube-system containers ...
	I1216 21:00:05.950450   60933 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 21:00:05.950515   60933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:05.999495   60933 cri.go:89] found id: ""
	I1216 21:00:05.999588   60933 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 21:00:06.024765   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:00:06.037807   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:00:06.037836   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:00:06.037894   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:00:06.048926   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:00:06.048997   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:00:06.060167   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:00:06.070841   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:00:06.070910   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:00:06.083517   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:00:06.099124   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:00:06.099214   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:00:06.110004   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:00:06.125600   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:00:06.125668   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:00:06.137212   60933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:00:06.148873   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:06.316611   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.220187   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.549730   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.698864   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
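	Note: because existing configuration files were found, the restart path re-runs individual kubeadm init phases rather than a full kubeadm init; the sequence executed above is, in order (illustrative condensation of the commands in this log):
	
	  KUBEADM_PATH="/var/lib/minikube/binaries/v1.20.0"
	  CFG=/var/tmp/minikube/kubeadm.yaml
	  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	    sudo env PATH="${KUBEADM_PATH}:$PATH" kubeadm init phase ${phase} --config "$CFG"
	  done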
	I1216 21:00:07.815495   60933 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:00:07.815657   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:08.316003   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:08.816482   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:09.315805   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:05.050699   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:05.051378   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:05.051413   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:05.051296   62108 retry.go:31] will retry after 2.184829789s: waiting for machine to come up
	I1216 21:00:07.237618   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:07.238137   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:07.238166   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:07.238049   62108 retry.go:31] will retry after 2.531717629s: waiting for machine to come up
	I1216 21:00:05.713060   60829 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:05.798544   60829 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.798569   60829 pod_ready.go:82] duration metric: took 2.508055323s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.798582   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mplxr" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.805322   60829 pod_ready.go:93] pod "kube-proxy-mplxr" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.805361   60829 pod_ready.go:82] duration metric: took 6.77ms for pod "kube-proxy-mplxr" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.805399   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.812700   60829 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.812727   60829 pod_ready.go:82] duration metric: took 7.281992ms for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.812741   60829 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.822004   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:10.321160   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:09.443582   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:11.443796   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:09.815863   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:10.316664   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:10.815852   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:11.316175   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:11.816446   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:12.316040   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:12.816172   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:13.316460   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:13.815700   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:14.316469   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:09.772318   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:09.772837   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:09.772869   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:09.772797   62108 retry.go:31] will retry after 2.557982234s: waiting for machine to come up
	I1216 21:00:12.331877   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:12.332340   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:12.332368   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:12.332298   62108 retry.go:31] will retry after 4.202991569s: waiting for machine to come up
	I1216 21:00:12.322897   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:14.323015   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:13.942154   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:16.442411   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:14.816539   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:15.315737   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:15.816465   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.316470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.816451   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:17.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:17.816470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:18.316165   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:18.816448   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:19.315972   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
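	Note: the repeated pgrep runs above poll (roughly every 500ms, per the timestamps) for the restarted kube-apiserver process before the healthz wait begins; the same loop by hand would be (illustrative sketch using the pattern from the log):
	
	  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done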
	I1216 21:00:16.539792   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.540299   60215 main.go:141] libmachine: (embed-certs-606219) Found IP for machine: 192.168.61.151
	I1216 21:00:16.540324   60215 main.go:141] libmachine: (embed-certs-606219) Reserving static IP address...
	I1216 21:00:16.540341   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has current primary IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.540771   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "embed-certs-606219", mac: "52:54:00:63:37:8f", ip: "192.168.61.151"} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.540810   60215 main.go:141] libmachine: (embed-certs-606219) DBG | skip adding static IP to network mk-embed-certs-606219 - found existing host DHCP lease matching {name: "embed-certs-606219", mac: "52:54:00:63:37:8f", ip: "192.168.61.151"}
	I1216 21:00:16.540827   60215 main.go:141] libmachine: (embed-certs-606219) Reserved static IP address: 192.168.61.151
	I1216 21:00:16.540839   60215 main.go:141] libmachine: (embed-certs-606219) Waiting for SSH to be available...
	I1216 21:00:16.540847   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Getting to WaitForSSH function...
	I1216 21:00:16.542958   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.543461   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.543503   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.543629   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Using SSH client type: external
	I1216 21:00:16.543663   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa (-rw-------)
	I1216 21:00:16.543696   60215 main.go:141] libmachine: (embed-certs-606219) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 21:00:16.543713   60215 main.go:141] libmachine: (embed-certs-606219) DBG | About to run SSH command:
	I1216 21:00:16.543732   60215 main.go:141] libmachine: (embed-certs-606219) DBG | exit 0
	I1216 21:00:16.671576   60215 main.go:141] libmachine: (embed-certs-606219) DBG | SSH cmd err, output: <nil>: 
	I1216 21:00:16.671965   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetConfigRaw
	I1216 21:00:16.672599   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:16.675179   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.675520   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.675549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.675726   60215 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/config.json ...
	I1216 21:00:16.675938   60215 machine.go:93] provisionDockerMachine start ...
	I1216 21:00:16.675955   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:16.676186   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.678481   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.678824   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.678846   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.679020   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.679203   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.679388   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.679530   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.679689   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.679883   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.679896   60215 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 21:00:16.791925   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 21:00:16.791959   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:16.792224   60215 buildroot.go:166] provisioning hostname "embed-certs-606219"
	I1216 21:00:16.792261   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:16.792492   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.794967   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.795359   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.795388   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.795496   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.795674   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.795845   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.795995   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.796238   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.796466   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.796486   60215 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-606219 && echo "embed-certs-606219" | sudo tee /etc/hostname
	I1216 21:00:16.923887   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-606219
	
	I1216 21:00:16.923922   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.926689   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.927228   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.927283   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.927500   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.927724   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.927943   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.928139   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.928396   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.928574   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.928590   60215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-606219' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-606219/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-606219' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 21:00:17.045462   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 21:00:17.045508   60215 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 21:00:17.045540   60215 buildroot.go:174] setting up certificates
	I1216 21:00:17.045560   60215 provision.go:84] configureAuth start
	I1216 21:00:17.045578   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:17.045889   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:17.048733   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.049038   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.049062   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.049216   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.051371   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.051713   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.051748   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.051861   60215 provision.go:143] copyHostCerts
	I1216 21:00:17.051940   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 21:00:17.051954   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 21:00:17.052033   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 21:00:17.052187   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 21:00:17.052203   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 21:00:17.052230   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 21:00:17.052306   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 21:00:17.052317   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 21:00:17.052342   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 21:00:17.052413   60215 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.embed-certs-606219 san=[127.0.0.1 192.168.61.151 embed-certs-606219 localhost minikube]
	I1216 21:00:17.345020   60215 provision.go:177] copyRemoteCerts
	I1216 21:00:17.345079   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 21:00:17.345116   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.348019   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.348323   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.348350   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.348554   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.348783   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.348931   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.349093   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:17.434520   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 21:00:17.462097   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1216 21:00:17.488071   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 21:00:17.516428   60215 provision.go:87] duration metric: took 470.851303ms to configureAuth
	I1216 21:00:17.516461   60215 buildroot.go:189] setting minikube options for container-runtime
	I1216 21:00:17.516673   60215 config.go:182] Loaded profile config "embed-certs-606219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:00:17.516763   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.519637   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.519981   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.520019   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.520229   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.520451   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.520654   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.520813   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.520977   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:17.521148   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:17.521166   60215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 21:00:17.787052   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 21:00:17.787084   60215 machine.go:96] duration metric: took 1.111132885s to provisionDockerMachine
	I1216 21:00:17.787111   60215 start.go:293] postStartSetup for "embed-certs-606219" (driver="kvm2")
	I1216 21:00:17.787126   60215 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 21:00:17.787145   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:17.787551   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 21:00:17.787588   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.790332   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.790710   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.790743   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.790891   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.791130   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.791336   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.791492   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:17.881548   60215 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 21:00:17.886692   60215 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 21:00:17.886720   60215 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 21:00:17.886788   60215 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 21:00:17.886886   60215 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 21:00:17.886983   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 21:00:17.897832   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:17.926273   60215 start.go:296] duration metric: took 139.147156ms for postStartSetup
	I1216 21:00:17.926316   60215 fix.go:56] duration metric: took 21.229856025s for fixHost
	I1216 21:00:17.926338   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.929204   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.929600   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.929623   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.929809   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.930036   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.930220   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.930411   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.930554   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:17.930723   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:17.930734   60215 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 21:00:18.040530   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382817.988837134
	
	I1216 21:00:18.040557   60215 fix.go:216] guest clock: 1734382817.988837134
	I1216 21:00:18.040590   60215 fix.go:229] Guest: 2024-12-16 21:00:17.988837134 +0000 UTC Remote: 2024-12-16 21:00:17.926320778 +0000 UTC m=+358.266755361 (delta=62.516356ms)
	I1216 21:00:18.040639   60215 fix.go:200] guest clock delta is within tolerance: 62.516356ms
	I1216 21:00:18.040650   60215 start.go:83] releasing machines lock for "embed-certs-606219", held for 21.34422537s
	I1216 21:00:18.040682   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.040997   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:18.044100   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.044549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.044584   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.044727   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045237   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045454   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045544   60215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 21:00:18.045602   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:18.045673   60215 ssh_runner.go:195] Run: cat /version.json
	I1216 21:00:18.045702   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:18.048852   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049066   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049259   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.049285   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049423   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:18.049578   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.049610   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049611   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:18.049688   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:18.049885   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:18.049908   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:18.050090   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:18.050082   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:18.050313   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:18.128381   60215 ssh_runner.go:195] Run: systemctl --version
	I1216 21:00:18.165162   60215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 21:00:18.313679   60215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 21:00:18.321330   60215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 21:00:18.321407   60215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 21:00:18.340577   60215 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 21:00:18.340601   60215 start.go:495] detecting cgroup driver to use...
	I1216 21:00:18.340672   60215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 21:00:18.357273   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 21:00:18.373169   60215 docker.go:217] disabling cri-docker service (if available) ...
	I1216 21:00:18.373231   60215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 21:00:18.387904   60215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 21:00:18.402499   60215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 21:00:18.528830   60215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 21:00:18.677746   60215 docker.go:233] disabling docker service ...
	I1216 21:00:18.677839   60215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 21:00:18.693059   60215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 21:00:18.707368   60215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 21:00:18.870936   60215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 21:00:19.011321   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 21:00:19.025645   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 21:00:19.045618   60215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 21:00:19.045695   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.056739   60215 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 21:00:19.056813   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.067975   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.078954   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.090165   60215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 21:00:19.101906   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.112949   60215 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.131186   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.142238   60215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 21:00:19.152768   60215 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 21:00:19.152830   60215 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 21:00:19.169166   60215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 21:00:19.188991   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:19.319083   60215 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 21:00:19.427266   60215 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 21:00:19.427377   60215 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 21:00:19.432716   60215 start.go:563] Will wait 60s for crictl version
	I1216 21:00:19.432793   60215 ssh_runner.go:195] Run: which crictl
	I1216 21:00:19.437514   60215 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 21:00:19.484613   60215 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 21:00:19.484726   60215 ssh_runner.go:195] Run: crio --version
	I1216 21:00:19.519451   60215 ssh_runner.go:195] Run: crio --version
	I1216 21:00:19.555298   60215 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 21:00:19.556696   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:19.559802   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:19.560178   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:19.560201   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:19.560467   60215 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1216 21:00:19.565180   60215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:00:19.579863   60215 kubeadm.go:883] updating cluster {Name:embed-certs-606219 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.32.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 21:00:19.579991   60215 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 21:00:19.580037   60215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:19.618480   60215 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 21:00:19.618556   60215 ssh_runner.go:195] Run: which lz4
	I1216 21:00:19.622839   60215 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 21:00:19.627438   60215 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 21:00:19.627482   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1216 21:00:16.819610   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:19.326427   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:17.942107   60421 pod_ready.go:93] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:17.942148   60421 pod_ready.go:82] duration metric: took 10.506728599s for pod "kube-proxy-5mw2b" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.942161   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.948518   60421 pod_ready.go:93] pod "kube-scheduler-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:17.948540   60421 pod_ready.go:82] duration metric: took 6.372903ms for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.948549   60421 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:19.956992   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:21.957271   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:19.815807   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:20.316465   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:20.816461   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.316731   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.816637   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:22.315727   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:22.816447   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:23.316510   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:23.816408   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:24.316454   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.237863   60215 crio.go:462] duration metric: took 1.615059209s to copy over tarball
	I1216 21:00:21.237956   60215 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 21:00:23.572502   60215 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.33450798s)
	I1216 21:00:23.572535   60215 crio.go:469] duration metric: took 2.334633133s to extract the tarball
	I1216 21:00:23.572549   60215 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 21:00:23.613530   60215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:23.667777   60215 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 21:00:23.667807   60215 cache_images.go:84] Images are preloaded, skipping loading
	I1216 21:00:23.667815   60215 kubeadm.go:934] updating node { 192.168.61.151 8443 v1.32.0 crio true true} ...
	I1216 21:00:23.667929   60215 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-606219 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 21:00:23.668009   60215 ssh_runner.go:195] Run: crio config
	I1216 21:00:23.716162   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:00:23.716184   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:23.716192   60215 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 21:00:23.716211   60215 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.151 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-606219 NodeName:embed-certs-606219 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 21:00:23.716337   60215 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-606219"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.151"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.151"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 21:00:23.716393   60215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 21:00:23.727236   60215 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 21:00:23.727337   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 21:00:23.737632   60215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1216 21:00:23.757380   60215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 21:00:23.774863   60215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1216 21:00:23.795070   60215 ssh_runner.go:195] Run: grep 192.168.61.151	control-plane.minikube.internal$ /etc/hosts
	I1216 21:00:23.799453   60215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:00:23.814278   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:23.962200   60215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:00:23.981947   60215 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219 for IP: 192.168.61.151
	I1216 21:00:23.981976   60215 certs.go:194] generating shared ca certs ...
	I1216 21:00:23.981999   60215 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:23.982156   60215 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 21:00:23.982197   60215 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 21:00:23.982204   60215 certs.go:256] generating profile certs ...
	I1216 21:00:23.982280   60215 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/client.key
	I1216 21:00:23.982336   60215 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.key.b346be49
	I1216 21:00:23.982376   60215 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.key
	I1216 21:00:23.982483   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 21:00:23.982513   60215 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 21:00:23.982523   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 21:00:23.982555   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 21:00:23.982582   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 21:00:23.982602   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 21:00:23.982655   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:23.983524   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 21:00:24.015369   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 21:00:24.043889   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 21:00:24.087807   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 21:00:24.137438   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1216 21:00:24.174859   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 21:00:24.200220   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 21:00:24.225811   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 21:00:24.251567   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 21:00:24.276737   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 21:00:24.302541   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 21:00:24.329876   60215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 21:00:24.350133   60215 ssh_runner.go:195] Run: openssl version
	I1216 21:00:24.356984   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 21:00:24.371219   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.376759   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.376816   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.383725   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 21:00:24.397759   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 21:00:24.409836   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.414765   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.414836   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.421662   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 21:00:24.433843   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 21:00:24.447839   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.453107   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.453185   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.459472   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 21:00:24.471714   60215 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 21:00:24.476881   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 21:00:24.486263   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 21:00:24.493146   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 21:00:24.500093   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 21:00:24.506599   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 21:00:24.512946   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 21:00:24.519699   60215 kubeadm.go:392] StartCluster: {Name:embed-certs-606219 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32
.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:00:24.519780   60215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 21:00:24.519861   60215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:24.570867   60215 cri.go:89] found id: ""
	I1216 21:00:24.570952   60215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 21:00:24.583857   60215 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 21:00:24.583887   60215 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 21:00:24.583943   60215 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 21:00:24.595709   60215 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 21:00:24.596734   60215 kubeconfig.go:125] found "embed-certs-606219" server: "https://192.168.61.151:8443"
	I1216 21:00:24.598569   60215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 21:00:24.609876   60215 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.151
	I1216 21:00:24.609905   60215 kubeadm.go:1160] stopping kube-system containers ...
	I1216 21:00:24.609917   60215 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 21:00:24.609964   60215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:24.654487   60215 cri.go:89] found id: ""
	I1216 21:00:24.654567   60215 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 21:00:24.676658   60215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:00:24.689546   60215 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:00:24.689571   60215 kubeadm.go:157] found existing configuration files:
	
	I1216 21:00:24.689615   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:00:21.819876   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:23.820061   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:23.957368   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:26.556301   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:24.816467   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:25.315789   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:25.816410   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.316537   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.816144   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.316659   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.816126   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:28.316568   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:28.816151   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:29.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:24.700928   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:00:24.701012   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:00:24.713438   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:00:24.725184   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:00:24.725257   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:00:24.737483   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:00:24.749488   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:00:24.749546   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:00:24.762322   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:00:24.774309   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:00:24.774391   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:00:24.787008   60215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:00:24.798394   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:25.009799   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:25.917432   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:26.175602   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:26.279646   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:26.362472   60215 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:00:26.362564   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.862646   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.362663   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.421335   60215 api_server.go:72] duration metric: took 1.058863872s to wait for apiserver process to appear ...
	I1216 21:00:27.421361   60215 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:00:27.421380   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:27.421869   60215 api_server.go:269] stopped: https://192.168.61.151:8443/healthz: Get "https://192.168.61.151:8443/healthz": dial tcp 192.168.61.151:8443: connect: connection refused
	I1216 21:00:27.921493   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:26.471175   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:28.819200   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:30.365380   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.365410   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.365425   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.416044   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.416078   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.422219   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.432135   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.432161   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.921790   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.929160   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:30.929192   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:31.421708   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:31.432805   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:31.432839   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:31.922000   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:31.933658   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 200:
	ok
	I1216 21:00:31.945496   60215 api_server.go:141] control plane version: v1.32.0
	I1216 21:00:31.945534   60215 api_server.go:131] duration metric: took 4.524165612s to wait for apiserver health ...
	I1216 21:00:31.945546   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:00:31.945555   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:31.947456   60215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:00:28.954572   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:30.955397   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:29.816510   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:30.315756   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:30.815774   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.316516   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.816503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:32.316499   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:32.816455   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:33.316478   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:33.816363   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:34.316057   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.948727   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:00:31.977877   60215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:00:32.014745   60215 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:00:32.027268   60215 system_pods.go:59] 8 kube-system pods found
	I1216 21:00:32.027303   60215 system_pods.go:61] "coredns-668d6bf9bc-rp29f" [0135dcef-2324-49ec-b459-f34b73efd82b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:00:32.027311   60215 system_pods.go:61] "etcd-embed-certs-606219" [05f01ef3-5d92-4d16-9643-0f56df3869f6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 21:00:32.027320   60215 system_pods.go:61] "kube-apiserver-embed-certs-606219" [4294c469-e47a-4722-a620-92c33d23b41e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 21:00:32.027326   60215 system_pods.go:61] "kube-controller-manager-embed-certs-606219" [cc8452e6-ca00-44dd-8d77-897df20d37f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 21:00:32.027354   60215 system_pods.go:61] "kube-proxy-8t495" [492be5cc-7d3a-4983-9bc7-14091bef7b43] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 21:00:32.027362   60215 system_pods.go:61] "kube-scheduler-embed-certs-606219" [63c42d73-a17a-4b37-a585-f7db5923c493] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 21:00:32.027376   60215 system_pods.go:61] "metrics-server-f79f97bbb-d6gmd" [50916d48-ee33-4e96-9507-c486d8ac7f7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:00:32.027387   60215 system_pods.go:61] "storage-provisioner" [1164651f-c3b5-445f-882a-60eb2f2fb3f8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 21:00:32.027399   60215 system_pods.go:74] duration metric: took 12.633182ms to wait for pod list to return data ...
	I1216 21:00:32.027409   60215 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:00:32.041648   60215 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:00:32.041677   60215 node_conditions.go:123] node cpu capacity is 2
	I1216 21:00:32.041686   60215 node_conditions.go:105] duration metric: took 14.27317ms to run NodePressure ...
	I1216 21:00:32.041704   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:32.492472   60215 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 21:00:32.504237   60215 kubeadm.go:739] kubelet initialised
	I1216 21:00:32.504271   60215 kubeadm.go:740] duration metric: took 11.772175ms waiting for restarted kubelet to initialise ...
	I1216 21:00:32.504282   60215 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:00:32.525531   60215 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:34.531954   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:31.321998   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:33.325288   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:32.959143   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:35.454928   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:37.455474   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:34.815839   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:35.316503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:35.816590   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.316231   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.816011   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:37.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:37.816494   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:38.316486   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:38.816475   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:39.315762   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.534516   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.032255   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:35.819575   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:38.322139   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:40.322804   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.456089   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:41.955128   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.816009   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:40.316444   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:40.816493   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.315869   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.816495   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:42.316034   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:42.816422   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:43.316432   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:43.815875   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:44.316036   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.032545   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:43.534471   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:42.819610   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:44.820561   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:43.955190   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:46.455540   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:44.816293   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.316458   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.815992   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:46.316054   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:46.816449   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:47.316113   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:47.816514   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:48.316353   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:48.816144   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:49.316435   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.031682   60215 pod_ready.go:93] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.031705   60215 pod_ready.go:82] duration metric: took 12.506146086s for pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.031715   60215 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.038109   60215 pod_ready.go:93] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.038138   60215 pod_ready.go:82] duration metric: took 6.416609ms for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.038149   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.043764   60215 pod_ready.go:93] pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.043784   60215 pod_ready.go:82] duration metric: took 5.621982ms for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.043793   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.053376   60215 pod_ready.go:93] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.053399   60215 pod_ready.go:82] duration metric: took 9.600142ms for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.053409   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8t495" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.058956   60215 pod_ready.go:93] pod "kube-proxy-8t495" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.058976   60215 pod_ready.go:82] duration metric: took 5.561188ms for pod "kube-proxy-8t495" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.058984   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.429908   60215 pod_ready.go:93] pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.429932   60215 pod_ready.go:82] duration metric: took 370.942192ms for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.429942   60215 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:47.438759   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:47.323605   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:49.819763   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:48.456270   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:50.955190   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:49.815935   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:50.316437   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:50.816335   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:51.315747   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:51.816504   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:52.315695   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:52.816115   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:53.316498   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:53.816529   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:54.315689   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:49.935961   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:51.937245   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:53.937302   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:51.820266   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:53.820748   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:52.956645   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:55.456064   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:54.816019   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:55.316484   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:55.816517   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.315858   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.816306   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:57.316447   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:57.815879   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:58.316493   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:58.816395   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:59.316225   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.437390   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:58.938617   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:56.323619   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:58.820330   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:57.956401   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:00.456844   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:02.457677   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:59.816440   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:00.315769   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:00.816285   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.316020   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.818175   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:02.315780   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:02.816411   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:03.315758   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:03.815810   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:04.316731   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.436856   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:03.436945   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:00.820484   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:03.323328   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:04.955714   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.455361   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:04.816470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:05.316528   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:05.815792   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:06.316491   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:06.815977   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:07.316002   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:07.816043   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:07.816114   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:07.861866   60933 cri.go:89] found id: ""
	I1216 21:01:07.861896   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.861906   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:07.861913   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:07.861978   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:07.905674   60933 cri.go:89] found id: ""
	I1216 21:01:07.905700   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.905707   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:07.905713   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:07.905798   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:07.949936   60933 cri.go:89] found id: ""
	I1216 21:01:07.949966   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.949977   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:07.949984   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:07.950048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:07.987196   60933 cri.go:89] found id: ""
	I1216 21:01:07.987223   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.987232   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:07.987237   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:07.987341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:08.033126   60933 cri.go:89] found id: ""
	I1216 21:01:08.033156   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.033168   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:08.033176   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:08.033252   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:08.072223   60933 cri.go:89] found id: ""
	I1216 21:01:08.072257   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.072270   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:08.072278   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:08.072345   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:08.117257   60933 cri.go:89] found id: ""
	I1216 21:01:08.117288   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.117299   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:08.117319   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:08.117389   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:08.158059   60933 cri.go:89] found id: ""
	I1216 21:01:08.158096   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.158106   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:08.158119   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:08.158133   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:08.232930   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:08.232966   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:08.277173   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:08.277204   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:08.331763   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:08.331802   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:08.346150   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:08.346178   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:08.488668   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:05.437627   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.938294   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:05.820491   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.821058   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:10.322630   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:09.456101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:11.461923   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:10.989383   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:11.003162   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:11.003266   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:11.040432   60933 cri.go:89] found id: ""
	I1216 21:01:11.040464   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.040475   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:11.040483   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:11.040547   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:11.083083   60933 cri.go:89] found id: ""
	I1216 21:01:11.083110   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.083117   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:11.083122   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:11.083182   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:11.122842   60933 cri.go:89] found id: ""
	I1216 21:01:11.122880   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.122893   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:11.122900   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:11.122969   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:11.168227   60933 cri.go:89] found id: ""
	I1216 21:01:11.168268   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.168279   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:11.168286   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:11.168359   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:11.218660   60933 cri.go:89] found id: ""
	I1216 21:01:11.218689   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.218701   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:11.218708   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:11.218774   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:11.281179   60933 cri.go:89] found id: ""
	I1216 21:01:11.281214   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.281227   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:11.281236   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:11.281315   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:11.326419   60933 cri.go:89] found id: ""
	I1216 21:01:11.326453   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.326464   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:11.326472   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:11.326535   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:11.368825   60933 cri.go:89] found id: ""
	I1216 21:01:11.368863   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.368875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:11.368887   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:11.368905   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:11.454848   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:11.454869   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:11.454888   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:11.541685   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:11.541724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:11.581804   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:11.581830   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:11.635800   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:11.635838   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:14.152441   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:14.167637   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:14.167720   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:14.206685   60933 cri.go:89] found id: ""
	I1216 21:01:14.206716   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.206728   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:14.206735   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:14.206796   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:14.248126   60933 cri.go:89] found id: ""
	I1216 21:01:14.248151   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.248159   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:14.248165   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:14.248215   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:14.285030   60933 cri.go:89] found id: ""
	I1216 21:01:14.285067   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.285079   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:14.285086   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:14.285151   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:14.325706   60933 cri.go:89] found id: ""
	I1216 21:01:14.325736   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.325747   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:14.325755   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:14.325820   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:14.369447   60933 cri.go:89] found id: ""
	I1216 21:01:14.369475   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.369486   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:14.369494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:14.369557   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:10.437872   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:12.937013   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:12.820480   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:15.319910   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:13.959919   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:16.458101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:14.407792   60933 cri.go:89] found id: ""
	I1216 21:01:14.407818   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.407826   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:14.407832   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:14.407890   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:14.448380   60933 cri.go:89] found id: ""
	I1216 21:01:14.448411   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.448419   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:14.448424   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:14.448473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:14.487116   60933 cri.go:89] found id: ""
	I1216 21:01:14.487144   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.487154   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:14.487164   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:14.487177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:14.547342   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:14.547390   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:14.563385   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:14.563424   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:14.637363   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:14.637394   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:14.637410   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:14.715586   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:14.715626   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:17.258974   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:17.273896   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:17.273970   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:17.317359   60933 cri.go:89] found id: ""
	I1216 21:01:17.317394   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.317405   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:17.317412   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:17.317476   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:17.361422   60933 cri.go:89] found id: ""
	I1216 21:01:17.361451   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.361462   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:17.361469   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:17.361568   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:17.401466   60933 cri.go:89] found id: ""
	I1216 21:01:17.401522   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.401534   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:17.401544   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:17.401614   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:17.439560   60933 cri.go:89] found id: ""
	I1216 21:01:17.439588   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.439597   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:17.439603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:17.439655   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:17.480310   60933 cri.go:89] found id: ""
	I1216 21:01:17.480333   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.480340   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:17.480345   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:17.480393   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:17.528562   60933 cri.go:89] found id: ""
	I1216 21:01:17.528589   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.528600   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:17.528607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:17.528671   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:17.569863   60933 cri.go:89] found id: ""
	I1216 21:01:17.569900   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.569908   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:17.569914   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:17.569975   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:17.610840   60933 cri.go:89] found id: ""
	I1216 21:01:17.610867   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.610875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:17.610884   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:17.610895   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:17.661002   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:17.661041   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:17.675290   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:17.675318   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:17.743550   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:17.743572   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:17.743584   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:17.824479   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:17.824524   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:15.437260   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:17.937487   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:17.324337   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:19.819325   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:18.956605   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:20.957030   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:20.373687   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:20.389149   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:20.389244   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:20.429594   60933 cri.go:89] found id: ""
	I1216 21:01:20.429626   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.429634   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:20.429639   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:20.429693   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:20.473157   60933 cri.go:89] found id: ""
	I1216 21:01:20.473185   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.473193   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:20.473198   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:20.473264   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:20.512549   60933 cri.go:89] found id: ""
	I1216 21:01:20.512586   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.512597   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:20.512604   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:20.512676   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:20.549275   60933 cri.go:89] found id: ""
	I1216 21:01:20.549310   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.549323   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:20.549344   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:20.549408   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:20.587405   60933 cri.go:89] found id: ""
	I1216 21:01:20.587435   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.587443   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:20.587449   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:20.587515   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:20.625364   60933 cri.go:89] found id: ""
	I1216 21:01:20.625393   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.625400   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:20.625406   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:20.625456   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:20.664018   60933 cri.go:89] found id: ""
	I1216 21:01:20.664050   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.664061   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:20.664068   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:20.664117   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:20.703860   60933 cri.go:89] found id: ""
	I1216 21:01:20.703890   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.703898   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:20.703906   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:20.703918   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:20.754433   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:20.754470   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:20.770136   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:20.770172   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:20.854025   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:20.854049   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:20.854061   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:20.939628   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:20.939661   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:23.489645   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:23.503603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:23.503667   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:23.543044   60933 cri.go:89] found id: ""
	I1216 21:01:23.543070   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.543077   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:23.543083   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:23.543131   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:23.580333   60933 cri.go:89] found id: ""
	I1216 21:01:23.580362   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.580371   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:23.580377   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:23.580428   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:23.616732   60933 cri.go:89] found id: ""
	I1216 21:01:23.616766   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.616778   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:23.616785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:23.616834   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:23.655771   60933 cri.go:89] found id: ""
	I1216 21:01:23.655793   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.655801   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:23.655807   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:23.655861   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:23.694400   60933 cri.go:89] found id: ""
	I1216 21:01:23.694430   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.694437   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:23.694443   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:23.694500   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:23.732592   60933 cri.go:89] found id: ""
	I1216 21:01:23.732622   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.732630   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:23.732636   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:23.732688   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:23.769752   60933 cri.go:89] found id: ""
	I1216 21:01:23.769787   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.769801   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:23.769810   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:23.769892   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:23.806891   60933 cri.go:89] found id: ""
	I1216 21:01:23.806925   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.806936   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:23.806947   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:23.806963   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:23.822887   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:23.822912   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:23.898795   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:23.898817   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:23.898830   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:23.978036   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:23.978073   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:24.032500   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:24.032528   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:20.437888   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:22.936895   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:21.819859   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:23.820383   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:23.456331   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:25.960513   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:26.585937   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:26.599470   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:26.599543   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:26.635421   60933 cri.go:89] found id: ""
	I1216 21:01:26.635446   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.635455   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:26.635461   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:26.635527   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:26.675347   60933 cri.go:89] found id: ""
	I1216 21:01:26.675379   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.675390   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:26.675397   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:26.675464   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:26.715444   60933 cri.go:89] found id: ""
	I1216 21:01:26.715469   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.715480   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:26.715541   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:26.715619   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:26.753841   60933 cri.go:89] found id: ""
	I1216 21:01:26.753874   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.753893   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:26.753901   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:26.753963   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:26.791427   60933 cri.go:89] found id: ""
	I1216 21:01:26.791453   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.791464   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:26.791473   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:26.791539   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:26.832772   60933 cri.go:89] found id: ""
	I1216 21:01:26.832804   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.832816   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:26.832823   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:26.832887   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:26.869963   60933 cri.go:89] found id: ""
	I1216 21:01:26.869990   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.869997   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:26.870003   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:26.870068   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:26.906792   60933 cri.go:89] found id: ""
	I1216 21:01:26.906821   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.906862   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:26.906875   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:26.906894   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:26.994820   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:26.994863   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:27.034642   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:27.034686   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:27.089128   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:27.089168   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:27.104368   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:27.104401   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:27.179852   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:25.436696   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:27.937229   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:26.319568   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:28.820132   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:28.454880   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:30.455734   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:29.681052   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:29.695376   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:29.695464   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:29.735562   60933 cri.go:89] found id: ""
	I1216 21:01:29.735588   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.735596   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:29.735602   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:29.735650   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:29.772635   60933 cri.go:89] found id: ""
	I1216 21:01:29.772663   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.772672   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:29.772678   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:29.772737   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:29.810471   60933 cri.go:89] found id: ""
	I1216 21:01:29.810499   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.810509   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:29.810516   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:29.810575   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:29.845917   60933 cri.go:89] found id: ""
	I1216 21:01:29.845952   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.845966   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:29.845975   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:29.846048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:29.883866   60933 cri.go:89] found id: ""
	I1216 21:01:29.883892   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.883900   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:29.883906   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:29.883968   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:29.920696   60933 cri.go:89] found id: ""
	I1216 21:01:29.920729   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.920740   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:29.920748   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:29.920831   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:29.957977   60933 cri.go:89] found id: ""
	I1216 21:01:29.958056   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.958069   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:29.958079   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:29.958144   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:29.995436   60933 cri.go:89] found id: ""
	I1216 21:01:29.995464   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.995472   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:29.995481   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:29.995492   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:30.046819   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:30.046859   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:30.062754   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:30.062807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:30.138932   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:30.138959   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:30.138975   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:30.225720   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:30.225768   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:32.768185   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:32.782642   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:32.782729   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:32.821995   60933 cri.go:89] found id: ""
	I1216 21:01:32.822029   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.822040   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:32.822048   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:32.822112   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:32.858453   60933 cri.go:89] found id: ""
	I1216 21:01:32.858487   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.858497   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:32.858504   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:32.858570   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:32.896269   60933 cri.go:89] found id: ""
	I1216 21:01:32.896304   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.896316   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:32.896323   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:32.896384   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:32.936795   60933 cri.go:89] found id: ""
	I1216 21:01:32.936820   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.936832   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:32.936838   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:32.936904   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:32.974779   60933 cri.go:89] found id: ""
	I1216 21:01:32.974810   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.974821   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:32.974828   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:32.974892   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:33.012201   60933 cri.go:89] found id: ""
	I1216 21:01:33.012226   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.012234   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:33.012239   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:33.012287   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:33.049777   60933 cri.go:89] found id: ""
	I1216 21:01:33.049803   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.049811   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:33.049816   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:33.049873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:33.087820   60933 cri.go:89] found id: ""
	I1216 21:01:33.087851   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.087859   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:33.087870   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:33.087885   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:33.140816   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:33.140854   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:33.154817   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:33.154855   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:33.231445   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:33.231474   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:33.231496   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:33.311547   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:33.311586   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:29.938045   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:32.436934   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:34.444209   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:31.321180   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:33.324091   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:32.956028   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.454994   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:37.455094   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.855686   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:35.870404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:35.870485   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:35.908175   60933 cri.go:89] found id: ""
	I1216 21:01:35.908204   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.908215   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:35.908222   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:35.908284   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:35.955456   60933 cri.go:89] found id: ""
	I1216 21:01:35.955483   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.955494   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:35.955501   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:35.955562   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:35.995170   60933 cri.go:89] found id: ""
	I1216 21:01:35.995201   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.995211   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:35.995218   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:35.995305   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:36.033729   60933 cri.go:89] found id: ""
	I1216 21:01:36.033758   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.033769   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:36.033776   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:36.033840   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:36.072756   60933 cri.go:89] found id: ""
	I1216 21:01:36.072787   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.072799   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:36.072806   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:36.072873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:36.112149   60933 cri.go:89] found id: ""
	I1216 21:01:36.112187   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.112198   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:36.112205   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:36.112258   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:36.148742   60933 cri.go:89] found id: ""
	I1216 21:01:36.148770   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.148781   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:36.148789   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:36.148855   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:36.192827   60933 cri.go:89] found id: ""
	I1216 21:01:36.192864   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.192875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:36.192886   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:36.192901   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:36.243822   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:36.243867   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:36.258258   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:36.258292   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:36.342847   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:36.342876   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:36.342891   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:36.424741   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:36.424780   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:38.967334   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:38.982208   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:38.982283   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:39.023903   60933 cri.go:89] found id: ""
	I1216 21:01:39.023931   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.023939   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:39.023945   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:39.023997   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:39.070314   60933 cri.go:89] found id: ""
	I1216 21:01:39.070342   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.070351   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:39.070359   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:39.070423   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:39.115081   60933 cri.go:89] found id: ""
	I1216 21:01:39.115106   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.115113   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:39.115119   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:39.115178   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:39.151933   60933 cri.go:89] found id: ""
	I1216 21:01:39.151959   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.151967   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:39.151972   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:39.152022   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:39.192280   60933 cri.go:89] found id: ""
	I1216 21:01:39.192307   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.192315   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:39.192322   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:39.192370   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:39.228792   60933 cri.go:89] found id: ""
	I1216 21:01:39.228814   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.228822   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:39.228827   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:39.228887   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:39.266823   60933 cri.go:89] found id: ""
	I1216 21:01:39.266847   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.266854   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:39.266860   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:39.266908   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:39.301317   60933 cri.go:89] found id: ""
	I1216 21:01:39.301340   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.301348   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:39.301361   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:39.301372   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:39.386615   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:39.386663   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:36.936376   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:38.936968   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.820025   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:37.820396   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:40.319915   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:39.457790   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:41.955758   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:39.433079   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:39.433112   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:39.489422   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:39.489458   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:39.504223   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:39.504259   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:39.587898   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:42.088900   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:42.103768   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:42.103854   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:42.141956   60933 cri.go:89] found id: ""
	I1216 21:01:42.142026   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.142040   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:42.142049   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:42.142117   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:42.178754   60933 cri.go:89] found id: ""
	I1216 21:01:42.178782   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.178818   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:42.178833   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:42.178891   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:42.215872   60933 cri.go:89] found id: ""
	I1216 21:01:42.215905   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.215916   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:42.215923   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:42.215991   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:42.253854   60933 cri.go:89] found id: ""
	I1216 21:01:42.253885   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.253896   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:42.253904   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:42.253972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:42.290963   60933 cri.go:89] found id: ""
	I1216 21:01:42.291008   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.291023   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:42.291039   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:42.291109   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:42.332920   60933 cri.go:89] found id: ""
	I1216 21:01:42.332946   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.332953   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:42.332959   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:42.333006   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:42.375060   60933 cri.go:89] found id: ""
	I1216 21:01:42.375093   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.375104   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:42.375112   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:42.375189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:42.416593   60933 cri.go:89] found id: ""
	I1216 21:01:42.416621   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.416631   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:42.416639   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:42.416651   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:42.475204   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:42.475260   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:42.491022   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:42.491057   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:42.566645   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:42.566672   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:42.566687   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:42.646815   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:42.646856   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:41.436872   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:43.936734   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:42.321709   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:44.321985   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:43.955807   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:46.455508   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:45.191912   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:45.205487   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:45.205548   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:45.245350   60933 cri.go:89] found id: ""
	I1216 21:01:45.245389   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.245397   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:45.245404   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:45.245482   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:45.302126   60933 cri.go:89] found id: ""
	I1216 21:01:45.302158   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.302171   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:45.302178   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:45.302251   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:45.342888   60933 cri.go:89] found id: ""
	I1216 21:01:45.342917   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.342932   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:45.342937   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:45.342990   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:45.381545   60933 cri.go:89] found id: ""
	I1216 21:01:45.381574   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.381585   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:45.381593   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:45.381652   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:45.418081   60933 cri.go:89] found id: ""
	I1216 21:01:45.418118   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.418131   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:45.418138   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:45.418207   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:45.458610   60933 cri.go:89] found id: ""
	I1216 21:01:45.458637   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.458647   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:45.458655   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:45.458713   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:45.500102   60933 cri.go:89] found id: ""
	I1216 21:01:45.500137   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.500148   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:45.500155   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:45.500217   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:45.542074   60933 cri.go:89] found id: ""
	I1216 21:01:45.542103   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.542113   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:45.542122   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:45.542134   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:45.597577   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:45.597614   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:45.614028   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:45.614075   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:45.693014   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:45.693039   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:45.693056   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:45.772260   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:45.772295   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:48.317073   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:48.332176   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:48.332242   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:48.369946   60933 cri.go:89] found id: ""
	I1216 21:01:48.369976   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.369988   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:48.369994   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:48.370059   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:48.407628   60933 cri.go:89] found id: ""
	I1216 21:01:48.407661   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.407672   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:48.407680   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:48.407742   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:48.444377   60933 cri.go:89] found id: ""
	I1216 21:01:48.444403   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.444411   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:48.444416   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:48.444467   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:48.485674   60933 cri.go:89] found id: ""
	I1216 21:01:48.485710   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.485722   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:48.485730   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:48.485785   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:48.530577   60933 cri.go:89] found id: ""
	I1216 21:01:48.530610   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.530621   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:48.530628   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:48.530693   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:48.567128   60933 cri.go:89] found id: ""
	I1216 21:01:48.567151   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.567159   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:48.567165   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:48.567216   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:48.603294   60933 cri.go:89] found id: ""
	I1216 21:01:48.603320   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.603327   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:48.603333   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:48.603392   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:48.646221   60933 cri.go:89] found id: ""
	I1216 21:01:48.646253   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.646265   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:48.646288   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:48.646318   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:48.697589   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:48.697624   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:48.711916   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:48.711947   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:48.789068   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:48.789097   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:48.789113   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:48.872340   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:48.872378   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:45.937806   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.437160   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:46.819986   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.821079   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.456975   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:50.956101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:51.418176   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:51.434851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:51.434948   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:51.478935   60933 cri.go:89] found id: ""
	I1216 21:01:51.478963   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.478975   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:51.478982   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:51.479043   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:51.524581   60933 cri.go:89] found id: ""
	I1216 21:01:51.524611   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.524622   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:51.524629   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:51.524686   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:51.563479   60933 cri.go:89] found id: ""
	I1216 21:01:51.563507   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.563516   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:51.563521   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:51.563578   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:51.601931   60933 cri.go:89] found id: ""
	I1216 21:01:51.601964   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.601975   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:51.601982   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:51.602044   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:51.638984   60933 cri.go:89] found id: ""
	I1216 21:01:51.639014   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.639025   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:51.639032   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:51.639093   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:51.681137   60933 cri.go:89] found id: ""
	I1216 21:01:51.681167   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.681178   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:51.681185   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:51.681263   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:51.722904   60933 cri.go:89] found id: ""
	I1216 21:01:51.722932   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.722941   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:51.722946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:51.722994   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:51.794403   60933 cri.go:89] found id: ""
	I1216 21:01:51.794434   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.794444   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:51.794453   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:51.794464   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:51.850688   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:51.850724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:51.866049   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:51.866079   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:51.949844   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:51.949880   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:51.949894   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:52.028981   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:52.029023   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:50.936202   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:52.936839   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:51.321959   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:53.819864   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:53.455360   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:55.954957   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:54.570192   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:54.585405   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:54.585489   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:54.627670   60933 cri.go:89] found id: ""
	I1216 21:01:54.627701   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.627712   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:54.627719   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:54.627782   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:54.671226   60933 cri.go:89] found id: ""
	I1216 21:01:54.671265   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.671276   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:54.671283   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:54.671337   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:54.705549   60933 cri.go:89] found id: ""
	I1216 21:01:54.705581   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.705592   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:54.705600   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:54.705663   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:54.743638   60933 cri.go:89] found id: ""
	I1216 21:01:54.743664   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.743671   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:54.743677   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:54.743728   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:54.781714   60933 cri.go:89] found id: ""
	I1216 21:01:54.781750   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.781760   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:54.781767   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:54.781831   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:54.830808   60933 cri.go:89] found id: ""
	I1216 21:01:54.830840   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.830851   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:54.830859   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:54.830923   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:54.868539   60933 cri.go:89] found id: ""
	I1216 21:01:54.868565   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.868573   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:54.868578   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:54.868626   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:54.906554   60933 cri.go:89] found id: ""
	I1216 21:01:54.906587   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.906595   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:54.906604   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:54.906617   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:54.960664   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:54.960696   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:54.975657   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:54.975686   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:55.052266   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:55.052293   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:55.052320   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:55.137894   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:55.137937   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:57.682769   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:57.699102   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:57.699184   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:57.764651   60933 cri.go:89] found id: ""
	I1216 21:01:57.764684   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.764692   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:57.764698   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:57.764755   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:57.805358   60933 cri.go:89] found id: ""
	I1216 21:01:57.805385   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.805395   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:57.805404   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:57.805474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:57.843589   60933 cri.go:89] found id: ""
	I1216 21:01:57.843623   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.843634   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:57.843644   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:57.843716   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:57.881725   60933 cri.go:89] found id: ""
	I1216 21:01:57.881748   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.881756   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:57.881761   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:57.881811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:57.922252   60933 cri.go:89] found id: ""
	I1216 21:01:57.922293   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.922305   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:57.922322   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:57.922385   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:57.962532   60933 cri.go:89] found id: ""
	I1216 21:01:57.962555   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.962562   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:57.962567   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:57.962615   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:58.002021   60933 cri.go:89] found id: ""
	I1216 21:01:58.002056   60933 logs.go:282] 0 containers: []
	W1216 21:01:58.002067   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:58.002074   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:58.002137   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:58.035648   60933 cri.go:89] found id: ""
	I1216 21:01:58.035672   60933 logs.go:282] 0 containers: []
	W1216 21:01:58.035680   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:58.035688   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:58.035699   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:58.116142   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:58.116177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:58.157683   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:58.157717   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:58.211686   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:58.211722   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:58.226385   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:58.226409   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:58.302287   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:54.937208   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:57.437396   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:59.438489   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:56.326836   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:58.818671   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:57.955980   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.455212   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.802544   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:00.816325   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:00.816405   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:00.853031   60933 cri.go:89] found id: ""
	I1216 21:02:00.853057   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.853065   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:00.853070   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:00.853122   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:00.891040   60933 cri.go:89] found id: ""
	I1216 21:02:00.891071   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.891082   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:00.891089   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:00.891151   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:00.929145   60933 cri.go:89] found id: ""
	I1216 21:02:00.929168   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.929175   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:00.929181   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:00.929227   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:00.976469   60933 cri.go:89] found id: ""
	I1216 21:02:00.976492   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.976500   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:00.976505   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:00.976553   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:01.015053   60933 cri.go:89] found id: ""
	I1216 21:02:01.015078   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.015086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:01.015092   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:01.015150   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:01.052859   60933 cri.go:89] found id: ""
	I1216 21:02:01.052891   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.052902   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:01.052909   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:01.053028   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:01.091209   60933 cri.go:89] found id: ""
	I1216 21:02:01.091238   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.091259   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:01.091266   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:01.091341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:01.127013   60933 cri.go:89] found id: ""
	I1216 21:02:01.127038   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.127047   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:01.127058   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:01.127072   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:01.179642   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:01.179697   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:01.196390   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:01.196416   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:01.275446   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:01.275478   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:01.275493   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:01.354391   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:01.354429   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:03.897672   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:03.911596   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:03.911654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:03.955700   60933 cri.go:89] found id: ""
	I1216 21:02:03.955726   60933 logs.go:282] 0 containers: []
	W1216 21:02:03.955735   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:03.955741   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:03.955803   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:03.995661   60933 cri.go:89] found id: ""
	I1216 21:02:03.995696   60933 logs.go:282] 0 containers: []
	W1216 21:02:03.995706   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:03.995713   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:03.995772   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:04.031368   60933 cri.go:89] found id: ""
	I1216 21:02:04.031391   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.031398   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:04.031406   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:04.031455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:04.067633   60933 cri.go:89] found id: ""
	I1216 21:02:04.067659   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.067666   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:04.067671   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:04.067719   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:04.105734   60933 cri.go:89] found id: ""
	I1216 21:02:04.105758   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.105768   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:04.105773   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:04.105824   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:04.146542   60933 cri.go:89] found id: ""
	I1216 21:02:04.146564   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.146571   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:04.146577   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:04.146623   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:04.184433   60933 cri.go:89] found id: ""
	I1216 21:02:04.184462   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.184473   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:04.184480   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:04.184551   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:04.223077   60933 cri.go:89] found id: ""
	I1216 21:02:04.223106   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.223117   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:04.223127   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:04.223140   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:04.279618   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:04.279656   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:04.295841   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:04.295865   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:04.372609   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:04.372632   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:04.372648   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:01.937175   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:03.937249   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.819801   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:02.820563   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:05.320087   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:02.955461   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:05.455023   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:07.456981   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:04.457597   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:04.457631   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:07.006004   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:07.020394   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:07.020537   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:07.064242   60933 cri.go:89] found id: ""
	I1216 21:02:07.064274   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.064283   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:07.064289   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:07.064337   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:07.108865   60933 cri.go:89] found id: ""
	I1216 21:02:07.108899   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.108910   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:07.108917   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:07.108985   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:07.149021   60933 cri.go:89] found id: ""
	I1216 21:02:07.149051   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.149060   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:07.149066   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:07.149120   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:07.187808   60933 cri.go:89] found id: ""
	I1216 21:02:07.187833   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.187843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:07.187850   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:07.187912   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:07.228748   60933 cri.go:89] found id: ""
	I1216 21:02:07.228774   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.228785   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:07.228792   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:07.228853   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:07.267961   60933 cri.go:89] found id: ""
	I1216 21:02:07.267996   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.268012   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:07.268021   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:07.268099   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:07.312464   60933 cri.go:89] found id: ""
	I1216 21:02:07.312491   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.312498   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:07.312503   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:07.312554   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:07.351902   60933 cri.go:89] found id: ""
	I1216 21:02:07.351933   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.351946   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:07.351958   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:07.351974   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:07.405985   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:07.406050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:07.420796   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:07.420842   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:07.506527   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:07.506559   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:07.506574   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:07.587965   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:07.588001   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:06.437434   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:08.937843   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:07.320229   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:09.819940   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:09.954900   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:11.955004   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:10.132876   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:10.146785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:10.146858   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:10.189278   60933 cri.go:89] found id: ""
	I1216 21:02:10.189312   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.189324   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:10.189332   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:10.189402   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:10.228331   60933 cri.go:89] found id: ""
	I1216 21:02:10.228370   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.228378   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:10.228383   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:10.228436   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:10.266424   60933 cri.go:89] found id: ""
	I1216 21:02:10.266458   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.266470   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:10.266478   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:10.266542   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:10.305865   60933 cri.go:89] found id: ""
	I1216 21:02:10.305890   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.305902   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:10.305909   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:10.305968   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:10.344211   60933 cri.go:89] found id: ""
	I1216 21:02:10.344239   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.344247   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:10.344253   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:10.344314   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:10.381939   60933 cri.go:89] found id: ""
	I1216 21:02:10.381993   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.382004   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:10.382011   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:10.382076   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:10.418882   60933 cri.go:89] found id: ""
	I1216 21:02:10.418908   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.418915   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:10.418921   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:10.418972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:10.458397   60933 cri.go:89] found id: ""
	I1216 21:02:10.458425   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.458434   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:10.458447   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:10.458462   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:10.472152   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:10.472180   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:10.545888   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:10.545913   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:10.545926   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:10.627223   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:10.627293   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:10.676606   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:10.676633   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:13.227283   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:13.242871   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:13.242954   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:13.280676   60933 cri.go:89] found id: ""
	I1216 21:02:13.280711   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.280723   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:13.280731   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:13.280786   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:13.321357   60933 cri.go:89] found id: ""
	I1216 21:02:13.321389   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.321400   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:13.321408   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:13.321474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:13.359002   60933 cri.go:89] found id: ""
	I1216 21:02:13.359030   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.359042   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:13.359050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:13.359116   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:13.395879   60933 cri.go:89] found id: ""
	I1216 21:02:13.395922   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.395941   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:13.395950   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:13.396017   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:13.436761   60933 cri.go:89] found id: ""
	I1216 21:02:13.436781   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.436788   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:13.436793   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:13.436852   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:13.478839   60933 cri.go:89] found id: ""
	I1216 21:02:13.478869   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.478877   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:13.478883   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:13.478947   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:13.520013   60933 cri.go:89] found id: ""
	I1216 21:02:13.520037   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.520044   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:13.520050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:13.520124   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:13.556973   60933 cri.go:89] found id: ""
	I1216 21:02:13.557001   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.557013   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:13.557023   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:13.557039   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:13.613499   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:13.613537   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:13.628689   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:13.628724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:13.706556   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:13.706576   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:13.706589   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:13.786379   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:13.786419   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:11.436179   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:13.436800   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:11.820109   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:13.820778   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:14.457666   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.955591   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.333578   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:16.347948   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:16.348020   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:16.386928   60933 cri.go:89] found id: ""
	I1216 21:02:16.386955   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.386963   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:16.386969   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:16.387033   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:16.425192   60933 cri.go:89] found id: ""
	I1216 21:02:16.425253   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.425265   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:16.425273   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:16.425355   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:16.465522   60933 cri.go:89] found id: ""
	I1216 21:02:16.465554   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.465565   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:16.465573   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:16.465638   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:16.504567   60933 cri.go:89] found id: ""
	I1216 21:02:16.504605   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.504616   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:16.504624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:16.504694   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:16.541823   60933 cri.go:89] found id: ""
	I1216 21:02:16.541852   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.541864   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:16.541872   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:16.541942   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:16.580898   60933 cri.go:89] found id: ""
	I1216 21:02:16.580927   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.580938   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:16.580946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:16.581003   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:16.626006   60933 cri.go:89] found id: ""
	I1216 21:02:16.626036   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.626046   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:16.626053   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:16.626109   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:16.662686   60933 cri.go:89] found id: ""
	I1216 21:02:16.662712   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.662719   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:16.662728   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:16.662740   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:16.717939   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:16.717978   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:16.733431   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:16.733466   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:16.807379   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:16.807409   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:16.807421   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:16.896455   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:16.896492   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:15.437791   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:17.935778   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.321167   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:18.819624   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:18.955621   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:20.956220   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:19.442959   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:19.458684   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:19.458749   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:19.499907   60933 cri.go:89] found id: ""
	I1216 21:02:19.499938   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.499947   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:19.499954   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:19.500002   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:19.538010   60933 cri.go:89] found id: ""
	I1216 21:02:19.538035   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.538043   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:19.538049   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:19.538148   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:19.577097   60933 cri.go:89] found id: ""
	I1216 21:02:19.577131   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.577139   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:19.577145   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:19.577196   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:19.617288   60933 cri.go:89] found id: ""
	I1216 21:02:19.617316   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.617326   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:19.617332   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:19.617392   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:19.658066   60933 cri.go:89] found id: ""
	I1216 21:02:19.658090   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.658097   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:19.658103   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:19.658153   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:19.696077   60933 cri.go:89] found id: ""
	I1216 21:02:19.696108   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.696121   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:19.696131   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:19.696189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:19.737657   60933 cri.go:89] found id: ""
	I1216 21:02:19.737692   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.737704   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:19.737712   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:19.737776   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:19.778699   60933 cri.go:89] found id: ""
	I1216 21:02:19.778729   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.778738   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:19.778746   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:19.778757   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:19.841941   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:19.841979   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:19.857752   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:19.857788   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:19.935980   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:19.936004   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:19.936020   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:20.019999   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:20.020046   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:22.566398   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:22.580376   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:22.580472   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:22.620240   60933 cri.go:89] found id: ""
	I1216 21:02:22.620273   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.620284   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:22.620292   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:22.620355   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:22.656413   60933 cri.go:89] found id: ""
	I1216 21:02:22.656444   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.656455   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:22.656463   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:22.656531   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:22.690956   60933 cri.go:89] found id: ""
	I1216 21:02:22.690978   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.690986   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:22.690992   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:22.691040   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:22.734851   60933 cri.go:89] found id: ""
	I1216 21:02:22.734885   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.734895   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:22.734903   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:22.734969   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:22.774416   60933 cri.go:89] found id: ""
	I1216 21:02:22.774450   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.774461   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:22.774467   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:22.774535   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:22.811162   60933 cri.go:89] found id: ""
	I1216 21:02:22.811192   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.811204   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:22.811212   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:22.811296   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:22.851955   60933 cri.go:89] found id: ""
	I1216 21:02:22.851980   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.851987   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:22.851993   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:22.852051   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:22.888699   60933 cri.go:89] found id: ""
	I1216 21:02:22.888725   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.888736   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:22.888747   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:22.888769   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:22.944065   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:22.944100   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:22.960842   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:22.960872   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:23.036229   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:23.036251   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:23.036263   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:23.122493   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:23.122535   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:19.936687   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:21.937222   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:24.437190   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:20.820544   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:22.820771   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.319776   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:22.956523   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.456180   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.667995   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:25.682152   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:25.682222   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:25.719092   60933 cri.go:89] found id: ""
	I1216 21:02:25.719120   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.719130   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:25.719135   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:25.719190   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:25.757668   60933 cri.go:89] found id: ""
	I1216 21:02:25.757702   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.757712   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:25.757720   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:25.757791   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:25.809743   60933 cri.go:89] found id: ""
	I1216 21:02:25.809776   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.809787   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:25.809795   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:25.809857   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:25.849181   60933 cri.go:89] found id: ""
	I1216 21:02:25.849211   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.849222   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:25.849230   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:25.849295   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:25.891032   60933 cri.go:89] found id: ""
	I1216 21:02:25.891079   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.891091   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:25.891098   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:25.891169   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:25.930549   60933 cri.go:89] found id: ""
	I1216 21:02:25.930575   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.930583   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:25.930589   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:25.930639   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:25.971709   60933 cri.go:89] found id: ""
	I1216 21:02:25.971736   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.971744   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:25.971749   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:25.971797   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:26.007728   60933 cri.go:89] found id: ""
	I1216 21:02:26.007760   60933 logs.go:282] 0 containers: []
	W1216 21:02:26.007769   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:26.007778   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:26.007791   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:26.059710   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:26.059752   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:26.074596   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:26.074627   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:26.145892   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:26.145913   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:26.145924   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:26.225961   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:26.226000   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:28.772974   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:28.787001   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:28.787078   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:28.828176   60933 cri.go:89] found id: ""
	I1216 21:02:28.828206   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.828214   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:28.828223   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:28.828292   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:28.872750   60933 cri.go:89] found id: ""
	I1216 21:02:28.872781   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.872792   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:28.872798   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:28.872859   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:28.914844   60933 cri.go:89] found id: ""
	I1216 21:02:28.914871   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.914879   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:28.914884   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:28.914934   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:28.953541   60933 cri.go:89] found id: ""
	I1216 21:02:28.953569   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.953579   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:28.953587   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:28.953647   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:28.992768   60933 cri.go:89] found id: ""
	I1216 21:02:28.992797   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.992808   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:28.992816   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:28.992882   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:29.030069   60933 cri.go:89] found id: ""
	I1216 21:02:29.030102   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.030113   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:29.030121   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:29.030187   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:29.068629   60933 cri.go:89] found id: ""
	I1216 21:02:29.068658   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.068666   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:29.068677   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:29.068726   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:29.103664   60933 cri.go:89] found id: ""
	I1216 21:02:29.103697   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.103708   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:29.103719   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:29.103732   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:29.151225   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:29.151276   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:29.209448   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:29.209499   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:29.225232   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:29.225257   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:29.309812   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:29.309832   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:29.309846   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:26.937193   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:28.937302   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:27.320052   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:29.820220   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:27.956244   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:29.957111   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:32.456969   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:31.896263   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:31.912378   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:31.912455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:31.950479   60933 cri.go:89] found id: ""
	I1216 21:02:31.950508   60933 logs.go:282] 0 containers: []
	W1216 21:02:31.950527   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:31.950535   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:31.950600   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:31.990479   60933 cri.go:89] found id: ""
	I1216 21:02:31.990504   60933 logs.go:282] 0 containers: []
	W1216 21:02:31.990515   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:31.990533   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:31.990599   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:32.032808   60933 cri.go:89] found id: ""
	I1216 21:02:32.032834   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.032843   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:32.032853   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:32.032913   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:32.069719   60933 cri.go:89] found id: ""
	I1216 21:02:32.069748   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.069759   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:32.069772   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:32.069830   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:32.106652   60933 cri.go:89] found id: ""
	I1216 21:02:32.106685   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.106694   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:32.106701   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:32.106767   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:32.145921   60933 cri.go:89] found id: ""
	I1216 21:02:32.145949   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.145957   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:32.145963   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:32.146014   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:32.206313   60933 cri.go:89] found id: ""
	I1216 21:02:32.206342   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.206351   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:32.206356   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:32.206410   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:32.262757   60933 cri.go:89] found id: ""
	I1216 21:02:32.262794   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.262806   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:32.262818   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:32.262832   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:32.320221   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:32.320251   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:32.375395   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:32.375437   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:32.391103   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:32.391137   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:32.474709   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:32.474741   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:32.474757   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:31.436689   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:33.436921   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:32.320631   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:34.819726   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:34.956369   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:37.455577   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:35.058809   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:35.073074   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:35.073157   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:35.115280   60933 cri.go:89] found id: ""
	I1216 21:02:35.115305   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.115312   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:35.115318   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:35.115378   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:35.151561   60933 cri.go:89] found id: ""
	I1216 21:02:35.151589   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.151597   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:35.151603   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:35.151654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:35.192061   60933 cri.go:89] found id: ""
	I1216 21:02:35.192088   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.192095   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:35.192111   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:35.192161   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:35.231493   60933 cri.go:89] found id: ""
	I1216 21:02:35.231523   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.231531   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:35.231538   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:35.231586   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:35.271236   60933 cri.go:89] found id: ""
	I1216 21:02:35.271291   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.271300   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:35.271306   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:35.271368   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:35.309950   60933 cri.go:89] found id: ""
	I1216 21:02:35.309980   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.309991   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:35.309999   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:35.310062   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:35.347762   60933 cri.go:89] found id: ""
	I1216 21:02:35.347790   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.347797   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:35.347803   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:35.347851   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:35.390732   60933 cri.go:89] found id: ""
	I1216 21:02:35.390757   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.390765   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:35.390774   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:35.390785   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:35.447068   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:35.447112   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:35.462873   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:35.462904   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:35.541120   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:35.541145   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:35.541162   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:35.627073   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:35.627120   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:38.170994   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:38.194371   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:38.194434   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:38.248023   60933 cri.go:89] found id: ""
	I1216 21:02:38.248050   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.248061   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:38.248069   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:38.248147   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:38.300143   60933 cri.go:89] found id: ""
	I1216 21:02:38.300175   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.300185   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:38.300193   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:38.300253   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:38.345273   60933 cri.go:89] found id: ""
	I1216 21:02:38.345300   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.345308   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:38.345314   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:38.345389   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:38.383032   60933 cri.go:89] found id: ""
	I1216 21:02:38.383066   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.383075   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:38.383081   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:38.383135   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:38.426042   60933 cri.go:89] found id: ""
	I1216 21:02:38.426074   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.426086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:38.426094   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:38.426159   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:38.467596   60933 cri.go:89] found id: ""
	I1216 21:02:38.467625   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.467634   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:38.467640   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:38.467692   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:38.509340   60933 cri.go:89] found id: ""
	I1216 21:02:38.509380   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.509391   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:38.509399   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:38.509470   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:38.549306   60933 cri.go:89] found id: ""
	I1216 21:02:38.549337   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.549354   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:38.549365   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:38.549381   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:38.564091   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:38.564131   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:38.639173   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:38.639201   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:38.639219   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:38.716320   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:38.716376   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:38.756779   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:38.756815   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:35.437230   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:37.938595   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:36.820302   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:39.319712   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:39.954558   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:41.955761   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:41.310680   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:41.327606   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:41.327684   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:41.371622   60933 cri.go:89] found id: ""
	I1216 21:02:41.371657   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.371670   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:41.371679   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:41.371739   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:41.408149   60933 cri.go:89] found id: ""
	I1216 21:02:41.408187   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.408198   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:41.408203   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:41.408252   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:41.448445   60933 cri.go:89] found id: ""
	I1216 21:02:41.448471   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.448478   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:41.448484   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:41.448533   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:41.489957   60933 cri.go:89] found id: ""
	I1216 21:02:41.489989   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.490000   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:41.490007   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:41.490069   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:41.532891   60933 cri.go:89] found id: ""
	I1216 21:02:41.532918   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.532930   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:41.532937   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:41.532992   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:41.570315   60933 cri.go:89] found id: ""
	I1216 21:02:41.570342   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.570351   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:41.570357   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:41.570455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:41.606833   60933 cri.go:89] found id: ""
	I1216 21:02:41.606867   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.606880   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:41.606890   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:41.606959   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:41.643862   60933 cri.go:89] found id: ""
	I1216 21:02:41.643886   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.643894   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:41.643902   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:41.643914   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:41.657621   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:41.657654   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:41.732256   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:41.732281   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:41.732295   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:41.822045   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:41.822081   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:41.863900   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:41.863933   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:40.436149   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:42.436247   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:44.436916   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:41.321155   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:43.819721   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:43.956057   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:46.455802   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:44.425154   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:44.440148   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:44.440223   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:44.478216   60933 cri.go:89] found id: ""
	I1216 21:02:44.478247   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.478258   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:44.478266   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:44.478329   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:44.517054   60933 cri.go:89] found id: ""
	I1216 21:02:44.517078   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.517084   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:44.517090   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:44.517137   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:44.554683   60933 cri.go:89] found id: ""
	I1216 21:02:44.554778   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.554801   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:44.554845   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:44.554927   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:44.600748   60933 cri.go:89] found id: ""
	I1216 21:02:44.600788   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.600800   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:44.600809   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:44.600863   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:44.637564   60933 cri.go:89] found id: ""
	I1216 21:02:44.637592   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.637600   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:44.637606   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:44.637656   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:44.676619   60933 cri.go:89] found id: ""
	I1216 21:02:44.676662   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.676674   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:44.676683   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:44.676755   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:44.715920   60933 cri.go:89] found id: ""
	I1216 21:02:44.715956   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.715964   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:44.715970   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:44.716027   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:44.755134   60933 cri.go:89] found id: ""
	I1216 21:02:44.755167   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.755179   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:44.755191   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:44.755202   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:44.796135   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:44.796164   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:44.850550   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:44.850593   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:44.865278   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:44.865305   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:44.942987   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:44.943013   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:44.943026   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:47.529850   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:47.546292   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:47.546369   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:47.589597   60933 cri.go:89] found id: ""
	I1216 21:02:47.589627   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.589640   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:47.589648   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:47.589713   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:47.630998   60933 cri.go:89] found id: ""
	I1216 21:02:47.631030   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.631043   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:47.631051   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:47.631118   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:47.670118   60933 cri.go:89] found id: ""
	I1216 21:02:47.670150   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.670162   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:47.670169   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:47.670233   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:47.714516   60933 cri.go:89] found id: ""
	I1216 21:02:47.714549   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.714560   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:47.714568   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:47.714631   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:47.752042   60933 cri.go:89] found id: ""
	I1216 21:02:47.752074   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.752086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:47.752093   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:47.752158   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:47.793612   60933 cri.go:89] found id: ""
	I1216 21:02:47.793645   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.793656   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:47.793664   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:47.793734   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:47.833489   60933 cri.go:89] found id: ""
	I1216 21:02:47.833518   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.833529   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:47.833541   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:47.833602   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:47.869744   60933 cri.go:89] found id: ""
	I1216 21:02:47.869772   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.869783   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:47.869793   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:47.869809   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:47.910640   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:47.910674   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:47.965747   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:47.965781   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:47.979760   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:47.979786   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:48.056887   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:48.056917   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:48.056933   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:46.439409   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.937248   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:46.320935   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.820700   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.955697   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.955859   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.641224   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:50.657267   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:50.657346   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:50.696890   60933 cri.go:89] found id: ""
	I1216 21:02:50.696916   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.696924   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:50.696930   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:50.696993   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:50.734485   60933 cri.go:89] found id: ""
	I1216 21:02:50.734514   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.734524   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:50.734533   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:50.734598   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:50.776241   60933 cri.go:89] found id: ""
	I1216 21:02:50.776268   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.776277   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:50.776283   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:50.776358   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:50.816449   60933 cri.go:89] found id: ""
	I1216 21:02:50.816482   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.816493   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:50.816501   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:50.816561   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:50.857458   60933 cri.go:89] found id: ""
	I1216 21:02:50.857481   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.857488   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:50.857494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:50.857556   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:50.895367   60933 cri.go:89] found id: ""
	I1216 21:02:50.895391   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.895398   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:50.895404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:50.895466   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:50.934101   60933 cri.go:89] found id: ""
	I1216 21:02:50.934128   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.934138   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:50.934152   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:50.934212   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:50.978625   60933 cri.go:89] found id: ""
	I1216 21:02:50.978654   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.978665   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:50.978675   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:50.978688   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:51.061867   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:51.061908   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:51.101188   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:51.101228   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:51.157426   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:51.157470   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:51.172835   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:51.172882   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:51.247678   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:53.748503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:53.763357   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:53.763425   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:53.807963   60933 cri.go:89] found id: ""
	I1216 21:02:53.807990   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.807999   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:53.808005   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:53.808063   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:53.846840   60933 cri.go:89] found id: ""
	I1216 21:02:53.846867   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.846876   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:53.846881   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:53.846929   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:53.885099   60933 cri.go:89] found id: ""
	I1216 21:02:53.885131   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.885146   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:53.885156   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:53.885226   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:53.923859   60933 cri.go:89] found id: ""
	I1216 21:02:53.923890   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.923901   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:53.923908   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:53.923972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:53.964150   60933 cri.go:89] found id: ""
	I1216 21:02:53.964176   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.964186   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:53.964201   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:53.964265   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:54.004676   60933 cri.go:89] found id: ""
	I1216 21:02:54.004707   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.004718   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:54.004725   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:54.004789   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:54.042560   60933 cri.go:89] found id: ""
	I1216 21:02:54.042585   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.042595   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:54.042603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:54.042666   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:54.081002   60933 cri.go:89] found id: ""
	I1216 21:02:54.081030   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.081038   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:54.081046   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:54.081058   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:54.132825   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:54.132865   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:54.147793   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:54.147821   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:54.226668   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:54.226692   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:54.226704   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:54.307792   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:54.307832   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:50.938230   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:53.436746   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.820949   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:53.320283   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:52.957187   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:54.958212   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.456612   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:56.852207   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:56.866404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:56.866469   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:56.911786   60933 cri.go:89] found id: ""
	I1216 21:02:56.911811   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.911820   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:56.911829   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:56.911886   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:56.953491   60933 cri.go:89] found id: ""
	I1216 21:02:56.953520   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.953535   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:56.953543   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:56.953610   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:56.991569   60933 cri.go:89] found id: ""
	I1216 21:02:56.991605   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.991616   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:56.991622   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:56.991685   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:57.026808   60933 cri.go:89] found id: ""
	I1216 21:02:57.026837   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.026845   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:57.026851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:57.026913   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:57.065539   60933 cri.go:89] found id: ""
	I1216 21:02:57.065569   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.065577   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:57.065583   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:57.065642   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:57.103911   60933 cri.go:89] found id: ""
	I1216 21:02:57.103942   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.103952   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:57.103960   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:57.104015   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:57.141177   60933 cri.go:89] found id: ""
	I1216 21:02:57.141200   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.141207   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:57.141213   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:57.141262   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:57.178532   60933 cri.go:89] found id: ""
	I1216 21:02:57.178590   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.178604   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:57.178614   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:57.178629   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:57.234811   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:57.234846   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:57.251540   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:57.251569   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:57.329029   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:57.329061   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:57.329077   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:57.412624   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:57.412665   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:55.436981   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.438061   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:55.819607   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.819648   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.820705   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.955043   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:01.956284   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.960422   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:59.974889   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:59.974966   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:00.012641   60933 cri.go:89] found id: ""
	I1216 21:03:00.012669   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.012676   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:00.012682   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:00.012730   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:00.053730   60933 cri.go:89] found id: ""
	I1216 21:03:00.053766   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.053778   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:00.053785   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:00.053847   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:00.091213   60933 cri.go:89] found id: ""
	I1216 21:03:00.091261   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.091274   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:00.091283   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:00.091357   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:00.131357   60933 cri.go:89] found id: ""
	I1216 21:03:00.131382   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.131390   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:00.131396   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:00.131460   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:00.168331   60933 cri.go:89] found id: ""
	I1216 21:03:00.168362   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.168373   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:00.168380   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:00.168446   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:00.208326   60933 cri.go:89] found id: ""
	I1216 21:03:00.208360   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.208369   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:00.208377   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:00.208440   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:00.245775   60933 cri.go:89] found id: ""
	I1216 21:03:00.245800   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.245808   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:00.245814   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:00.245863   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:00.283062   60933 cri.go:89] found id: ""
	I1216 21:03:00.283091   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.283100   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:00.283108   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:00.283119   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:00.358767   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:00.358787   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:00.358799   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:00.443422   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:00.443460   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:00.491511   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:00.491551   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:00.566131   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:00.566172   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:03.080319   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:03.094733   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:03.094818   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:03.132388   60933 cri.go:89] found id: ""
	I1216 21:03:03.132419   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.132428   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:03.132433   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:03.132488   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:03.172345   60933 cri.go:89] found id: ""
	I1216 21:03:03.172374   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.172386   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:03.172393   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:03.172474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:03.210444   60933 cri.go:89] found id: ""
	I1216 21:03:03.210479   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.210488   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:03.210494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:03.210544   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:03.248605   60933 cri.go:89] found id: ""
	I1216 21:03:03.248644   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.248656   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:03.248664   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:03.248723   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:03.286822   60933 cri.go:89] found id: ""
	I1216 21:03:03.286854   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.286862   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:03.286868   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:03.286921   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:03.329304   60933 cri.go:89] found id: ""
	I1216 21:03:03.329333   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.329344   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:03.329352   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:03.329417   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:03.367337   60933 cri.go:89] found id: ""
	I1216 21:03:03.367361   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.367368   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:03.367373   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:03.367420   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:03.409799   60933 cri.go:89] found id: ""
	I1216 21:03:03.409821   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.409829   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:03.409838   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:03.409850   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:03.466941   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:03.466976   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:03.483090   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:03.483117   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:03.566835   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:03.566860   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:03.566878   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:03.649747   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:03.649793   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:59.936221   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:01.936251   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:03.936714   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:02.319063   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:04.319653   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:03.956397   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:05.956531   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:06.193505   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:06.207797   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:06.207878   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:06.245401   60933 cri.go:89] found id: ""
	I1216 21:03:06.245437   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.245448   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:06.245456   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:06.245521   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:06.301205   60933 cri.go:89] found id: ""
	I1216 21:03:06.301239   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.301250   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:06.301257   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:06.301326   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:06.340325   60933 cri.go:89] found id: ""
	I1216 21:03:06.340352   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.340362   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:06.340369   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:06.340429   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:06.378321   60933 cri.go:89] found id: ""
	I1216 21:03:06.378351   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.378359   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:06.378365   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:06.378422   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:06.416354   60933 cri.go:89] found id: ""
	I1216 21:03:06.416390   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.416401   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:06.416409   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:06.416473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:06.459926   60933 cri.go:89] found id: ""
	I1216 21:03:06.459955   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.459967   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:06.459975   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:06.460063   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:06.501818   60933 cri.go:89] found id: ""
	I1216 21:03:06.501849   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.501860   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:06.501866   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:06.501926   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:06.537552   60933 cri.go:89] found id: ""
	I1216 21:03:06.537583   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.537598   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:06.537607   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:06.537621   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:06.592170   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:06.592212   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:06.607148   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:06.607183   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:06.676114   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:06.676140   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:06.676151   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:06.756009   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:06.756052   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:09.298166   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:09.313104   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:09.313189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:09.356598   60933 cri.go:89] found id: ""
	I1216 21:03:09.356625   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.356640   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:09.356649   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:09.356715   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:05.937241   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:07.938858   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:06.322260   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:08.818974   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:08.455838   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:10.955332   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:09.395406   60933 cri.go:89] found id: ""
	I1216 21:03:09.395439   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.395449   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:09.395456   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:09.395521   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:09.440401   60933 cri.go:89] found id: ""
	I1216 21:03:09.440423   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.440430   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:09.440435   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:09.440504   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:09.478798   60933 cri.go:89] found id: ""
	I1216 21:03:09.478828   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.478843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:09.478853   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:09.478921   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:09.515542   60933 cri.go:89] found id: ""
	I1216 21:03:09.515575   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.515587   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:09.515596   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:09.515654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:09.554150   60933 cri.go:89] found id: ""
	I1216 21:03:09.554183   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.554194   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:09.554205   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:09.554279   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:09.591699   60933 cri.go:89] found id: ""
	I1216 21:03:09.591730   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.591740   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:09.591747   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:09.591811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:09.629938   60933 cri.go:89] found id: ""
	I1216 21:03:09.629970   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.629980   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:09.629991   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:09.630008   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:09.711255   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:09.711284   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:09.711300   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:09.790202   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:09.790243   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:09.839567   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:09.839597   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:09.893010   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:09.893050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:12.409934   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:12.423715   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:12.423789   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:12.461995   60933 cri.go:89] found id: ""
	I1216 21:03:12.462038   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.462046   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:12.462052   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:12.462101   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:12.501738   60933 cri.go:89] found id: ""
	I1216 21:03:12.501769   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.501779   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:12.501785   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:12.501833   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:12.541758   60933 cri.go:89] found id: ""
	I1216 21:03:12.541785   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.541795   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:12.541802   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:12.541850   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:12.579173   60933 cri.go:89] found id: ""
	I1216 21:03:12.579199   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.579206   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:12.579212   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:12.579302   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:12.624382   60933 cri.go:89] found id: ""
	I1216 21:03:12.624407   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.624418   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:12.624426   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:12.624488   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:12.665139   60933 cri.go:89] found id: ""
	I1216 21:03:12.665178   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.665190   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:12.665200   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:12.665274   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:12.711586   60933 cri.go:89] found id: ""
	I1216 21:03:12.711611   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.711619   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:12.711627   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:12.711678   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:12.761566   60933 cri.go:89] found id: ""
	I1216 21:03:12.761600   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.761612   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:12.761624   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:12.761640   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:12.824282   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:12.824315   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:12.839335   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:12.839371   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:12.918317   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:12.918341   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:12.918357   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:13.000375   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:13.000410   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:10.438136   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:12.936742   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:11.319284   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:13.320036   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:15.322965   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:12.955450   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:14.956186   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:16.956603   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:15.542372   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:15.556877   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:15.556960   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:15.599345   60933 cri.go:89] found id: ""
	I1216 21:03:15.599378   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.599389   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:15.599414   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:15.599479   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:15.642072   60933 cri.go:89] found id: ""
	I1216 21:03:15.642106   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.642116   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:15.642124   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:15.642189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:15.679989   60933 cri.go:89] found id: ""
	I1216 21:03:15.680025   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.680036   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:15.680044   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:15.680103   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:15.718343   60933 cri.go:89] found id: ""
	I1216 21:03:15.718371   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.718378   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:15.718384   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:15.718433   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:15.759937   60933 cri.go:89] found id: ""
	I1216 21:03:15.759971   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.759981   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:15.759988   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:15.760081   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:15.801434   60933 cri.go:89] found id: ""
	I1216 21:03:15.801463   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.801471   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:15.801477   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:15.801540   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:15.841855   60933 cri.go:89] found id: ""
	I1216 21:03:15.841879   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.841886   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:15.841892   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:15.841962   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:15.883951   60933 cri.go:89] found id: ""
	I1216 21:03:15.883974   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.883982   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:15.883990   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:15.884004   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:15.960868   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:15.960902   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:16.005700   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:16.005730   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:16.061128   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:16.061165   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:16.075601   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:16.075630   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:16.147810   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:18.648677   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:18.663298   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:18.663367   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:18.713281   60933 cri.go:89] found id: ""
	I1216 21:03:18.713313   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.713324   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:18.713332   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:18.713396   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:18.764861   60933 cri.go:89] found id: ""
	I1216 21:03:18.764892   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.764905   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:18.764912   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:18.764978   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:18.816140   60933 cri.go:89] found id: ""
	I1216 21:03:18.816170   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.816180   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:18.816188   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:18.816251   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:18.852118   60933 cri.go:89] found id: ""
	I1216 21:03:18.852151   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.852163   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:18.852171   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:18.852235   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:18.887996   60933 cri.go:89] found id: ""
	I1216 21:03:18.888018   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.888025   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:18.888031   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:18.888089   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:18.925415   60933 cri.go:89] found id: ""
	I1216 21:03:18.925437   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.925445   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:18.925451   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:18.925498   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:18.964853   60933 cri.go:89] found id: ""
	I1216 21:03:18.964884   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.964892   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:18.964897   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:18.964964   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:19.000822   60933 cri.go:89] found id: ""
	I1216 21:03:19.000848   60933 logs.go:282] 0 containers: []
	W1216 21:03:19.000856   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:19.000865   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:19.000879   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:19.051571   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:19.051612   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:19.066737   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:19.066767   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:19.143120   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:19.143144   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:19.143156   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:19.229811   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:19.229850   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:15.437189   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:17.439345   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:17.820374   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:19.820460   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:19.455707   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:21.955275   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:21.776440   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:21.792869   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:21.792951   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:21.831100   60933 cri.go:89] found id: ""
	I1216 21:03:21.831127   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.831134   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:21.831140   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:21.831196   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:21.869124   60933 cri.go:89] found id: ""
	I1216 21:03:21.869147   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.869155   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:21.869160   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:21.869215   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:21.909891   60933 cri.go:89] found id: ""
	I1216 21:03:21.909926   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.909938   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:21.909946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:21.910032   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:21.949140   60933 cri.go:89] found id: ""
	I1216 21:03:21.949169   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.949179   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:21.949186   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:21.949245   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:21.987741   60933 cri.go:89] found id: ""
	I1216 21:03:21.987771   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.987780   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:21.987785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:21.987839   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:22.025565   60933 cri.go:89] found id: ""
	I1216 21:03:22.025593   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.025601   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:22.025607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:22.025659   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:22.062076   60933 cri.go:89] found id: ""
	I1216 21:03:22.062110   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.062120   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:22.062127   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:22.062198   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:22.102037   60933 cri.go:89] found id: ""
	I1216 21:03:22.102065   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.102093   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:22.102105   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:22.102122   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:22.159185   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:22.159219   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:22.175139   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:22.175168   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:22.255769   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:22.255801   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:22.255817   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:22.339633   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:22.339681   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:19.937328   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:22.435709   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.436704   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:22.319227   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.819278   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.455668   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:26.956382   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.883865   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:24.898198   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:24.898287   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:24.939472   60933 cri.go:89] found id: ""
	I1216 21:03:24.939500   60933 logs.go:282] 0 containers: []
	W1216 21:03:24.939511   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:24.939518   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:24.939583   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:24.981798   60933 cri.go:89] found id: ""
	I1216 21:03:24.981822   60933 logs.go:282] 0 containers: []
	W1216 21:03:24.981829   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:24.981834   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:24.981889   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:25.021332   60933 cri.go:89] found id: ""
	I1216 21:03:25.021366   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.021373   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:25.021379   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:25.021431   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:25.057811   60933 cri.go:89] found id: ""
	I1216 21:03:25.057836   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.057843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:25.057848   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:25.057907   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:25.093852   60933 cri.go:89] found id: ""
	I1216 21:03:25.093881   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.093890   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:25.093895   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:25.093945   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:25.132779   60933 cri.go:89] found id: ""
	I1216 21:03:25.132813   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.132825   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:25.132834   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:25.132912   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:25.173942   60933 cri.go:89] found id: ""
	I1216 21:03:25.173967   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.173974   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:25.173990   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:25.174048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:25.213105   60933 cri.go:89] found id: ""
	I1216 21:03:25.213127   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.213135   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:25.213144   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:25.213155   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:25.267517   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:25.267557   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:25.284144   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:25.284177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:25.362901   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:25.362931   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:25.362947   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:25.450193   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:25.450227   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:27.995716   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:28.012044   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:28.012138   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:28.050404   60933 cri.go:89] found id: ""
	I1216 21:03:28.050432   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.050441   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:28.050446   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:28.050492   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:28.087830   60933 cri.go:89] found id: ""
	I1216 21:03:28.087855   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.087862   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:28.087885   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:28.087933   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:28.125122   60933 cri.go:89] found id: ""
	I1216 21:03:28.125147   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.125154   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:28.125160   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:28.125233   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:28.160619   60933 cri.go:89] found id: ""
	I1216 21:03:28.160646   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.160655   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:28.160661   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:28.160726   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:28.198951   60933 cri.go:89] found id: ""
	I1216 21:03:28.198977   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.198986   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:28.198993   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:28.199059   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:28.236596   60933 cri.go:89] found id: ""
	I1216 21:03:28.236621   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.236629   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:28.236635   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:28.236707   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:28.273955   60933 cri.go:89] found id: ""
	I1216 21:03:28.273979   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.273986   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:28.273992   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:28.274061   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:28.311908   60933 cri.go:89] found id: ""
	I1216 21:03:28.311943   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.311954   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:28.311965   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:28.311979   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:28.363870   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:28.363910   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:28.379919   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:28.379945   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:28.459998   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:28.460019   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:28.460030   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:28.543229   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:28.543306   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:26.936661   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:29.437169   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:26.820563   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:29.319981   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:28.956791   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.456708   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.086525   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:31.100833   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:31.100950   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:31.141356   60933 cri.go:89] found id: ""
	I1216 21:03:31.141385   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.141396   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:31.141403   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:31.141465   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:31.176609   60933 cri.go:89] found id: ""
	I1216 21:03:31.176641   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.176650   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:31.176657   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:31.176721   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:31.213959   60933 cri.go:89] found id: ""
	I1216 21:03:31.213984   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.213991   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:31.213997   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:31.214058   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:31.255183   60933 cri.go:89] found id: ""
	I1216 21:03:31.255208   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.255215   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:31.255220   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:31.255297   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:31.293475   60933 cri.go:89] found id: ""
	I1216 21:03:31.293501   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.293508   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:31.293514   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:31.293561   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:31.332010   60933 cri.go:89] found id: ""
	I1216 21:03:31.332041   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.332052   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:31.332061   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:31.332119   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:31.370301   60933 cri.go:89] found id: ""
	I1216 21:03:31.370331   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.370342   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:31.370349   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:31.370414   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:31.419526   60933 cri.go:89] found id: ""
	I1216 21:03:31.419553   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.419561   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:31.419570   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:31.419583   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:31.480125   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:31.480160   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:31.495464   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:31.495497   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:31.570747   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:31.570773   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:31.570788   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:31.651521   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:31.651564   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:34.200969   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:34.216519   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:34.216596   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:34.254185   60933 cri.go:89] found id: ""
	I1216 21:03:34.254218   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.254227   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:34.254242   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:34.254312   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:34.293194   60933 cri.go:89] found id: ""
	I1216 21:03:34.293225   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.293236   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:34.293242   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:34.293297   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:34.335002   60933 cri.go:89] found id: ""
	I1216 21:03:34.335030   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.335042   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:34.335050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:34.335112   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:34.370854   60933 cri.go:89] found id: ""
	I1216 21:03:34.370880   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.370887   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:34.370893   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:34.370938   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:31.439597   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.935941   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.820337   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.820497   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.955185   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:36.455713   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:34.409155   60933 cri.go:89] found id: ""
	I1216 21:03:34.409181   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.409189   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:34.409195   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:34.409256   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:34.448555   60933 cri.go:89] found id: ""
	I1216 21:03:34.448583   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.448594   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:34.448601   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:34.448663   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:34.486800   60933 cri.go:89] found id: ""
	I1216 21:03:34.486829   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.486842   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:34.486851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:34.486919   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:34.530274   60933 cri.go:89] found id: ""
	I1216 21:03:34.530299   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.530307   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:34.530317   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:34.530335   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:34.601587   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:34.601620   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:34.601637   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:34.680215   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:34.680250   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:34.721362   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:34.721389   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:34.776652   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:34.776693   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:37.292877   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:37.306976   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:37.307060   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:37.349370   60933 cri.go:89] found id: ""
	I1216 21:03:37.349405   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.349416   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:37.349424   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:37.349486   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:37.387213   60933 cri.go:89] found id: ""
	I1216 21:03:37.387271   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.387285   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:37.387294   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:37.387361   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:37.427138   60933 cri.go:89] found id: ""
	I1216 21:03:37.427164   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.427175   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:37.427182   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:37.427269   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:37.466751   60933 cri.go:89] found id: ""
	I1216 21:03:37.466776   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.466783   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:37.466788   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:37.466846   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:37.505078   60933 cri.go:89] found id: ""
	I1216 21:03:37.505115   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.505123   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:37.505128   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:37.505189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:37.548642   60933 cri.go:89] found id: ""
	I1216 21:03:37.548665   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.548673   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:37.548679   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:37.548738   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:37.592354   60933 cri.go:89] found id: ""
	I1216 21:03:37.592379   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.592386   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:37.592391   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:37.592441   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:37.631179   60933 cri.go:89] found id: ""
	I1216 21:03:37.631212   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.631221   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:37.631230   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:37.631261   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:37.683021   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:37.683062   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:37.698056   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:37.698087   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:37.774368   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:37.774397   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:37.774422   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:37.860470   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:37.860511   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:35.936409   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:37.936652   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:36.319436   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:38.819727   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:38.456251   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.957354   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.405278   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:40.420390   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:40.420473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:40.463963   60933 cri.go:89] found id: ""
	I1216 21:03:40.463994   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.464033   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:40.464041   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:40.464107   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:40.510321   60933 cri.go:89] found id: ""
	I1216 21:03:40.510352   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.510369   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:40.510376   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:40.510441   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:40.546580   60933 cri.go:89] found id: ""
	I1216 21:03:40.546610   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.546619   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:40.546624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:40.546686   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:40.583109   60933 cri.go:89] found id: ""
	I1216 21:03:40.583136   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.583144   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:40.583149   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:40.583202   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:40.628747   60933 cri.go:89] found id: ""
	I1216 21:03:40.628771   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.628778   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:40.628784   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:40.628845   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:40.663757   60933 cri.go:89] found id: ""
	I1216 21:03:40.663785   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.663796   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:40.663804   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:40.663867   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:40.703483   60933 cri.go:89] found id: ""
	I1216 21:03:40.703513   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.703522   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:40.703528   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:40.703592   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:40.742585   60933 cri.go:89] found id: ""
	I1216 21:03:40.742622   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.742632   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:40.742641   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:40.742653   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:40.757771   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:40.757809   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:40.837615   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:40.837642   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:40.837656   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:40.915403   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:40.915442   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:40.960762   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:40.960790   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:43.515302   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:43.530831   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:43.530906   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:43.571680   60933 cri.go:89] found id: ""
	I1216 21:03:43.571704   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.571712   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:43.571718   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:43.571779   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:43.615912   60933 cri.go:89] found id: ""
	I1216 21:03:43.615940   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.615948   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:43.615955   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:43.616013   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:43.654206   60933 cri.go:89] found id: ""
	I1216 21:03:43.654231   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.654241   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:43.654249   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:43.654309   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:43.690509   60933 cri.go:89] found id: ""
	I1216 21:03:43.690533   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.690541   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:43.690548   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:43.690595   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:43.728601   60933 cri.go:89] found id: ""
	I1216 21:03:43.728627   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.728634   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:43.728639   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:43.728685   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:43.769092   60933 cri.go:89] found id: ""
	I1216 21:03:43.769130   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.769198   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:43.769215   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:43.769292   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:43.812492   60933 cri.go:89] found id: ""
	I1216 21:03:43.812525   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.812537   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:43.812544   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:43.812613   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:43.852748   60933 cri.go:89] found id: ""
	I1216 21:03:43.852778   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.852787   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:43.852795   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:43.852807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:43.907800   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:43.907839   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:43.922806   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:43.922833   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:44.002511   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:44.002538   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:44.002551   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:44.081760   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:44.081801   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:40.437134   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:42.437214   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.820244   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:43.321298   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:43.455891   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:45.456281   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:46.625868   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:46.640266   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:46.640341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:46.677137   60933 cri.go:89] found id: ""
	I1216 21:03:46.677168   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.677179   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:46.677185   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:46.677241   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:46.714340   60933 cri.go:89] found id: ""
	I1216 21:03:46.714373   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.714382   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:46.714389   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:46.714449   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:46.752713   60933 cri.go:89] found id: ""
	I1216 21:03:46.752743   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.752754   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:46.752763   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:46.752827   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:46.790787   60933 cri.go:89] found id: ""
	I1216 21:03:46.790821   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.790837   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:46.790845   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:46.790902   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:46.827905   60933 cri.go:89] found id: ""
	I1216 21:03:46.827934   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.827945   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:46.827954   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:46.828023   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:46.863522   60933 cri.go:89] found id: ""
	I1216 21:03:46.863547   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.863563   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:46.863570   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:46.863634   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:46.906005   60933 cri.go:89] found id: ""
	I1216 21:03:46.906035   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.906044   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:46.906049   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:46.906103   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:46.947639   60933 cri.go:89] found id: ""
	I1216 21:03:46.947668   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.947679   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:46.947691   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:46.947706   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:47.001693   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:47.001732   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:47.023122   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:47.023166   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:47.108257   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:47.108291   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:47.108303   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:47.184768   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:47.184807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:44.940074   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.437155   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:45.819943   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.820443   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.820700   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.955794   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.960595   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:52.455630   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.729433   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:49.743836   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:49.743903   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:49.783021   60933 cri.go:89] found id: ""
	I1216 21:03:49.783054   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.783066   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:49.783074   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:49.783144   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:49.820371   60933 cri.go:89] found id: ""
	I1216 21:03:49.820399   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.820409   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:49.820416   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:49.820476   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:49.857918   60933 cri.go:89] found id: ""
	I1216 21:03:49.857948   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.857959   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:49.857967   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:49.858033   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:49.899517   60933 cri.go:89] found id: ""
	I1216 21:03:49.899548   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.899558   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:49.899565   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:49.899632   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:49.938771   60933 cri.go:89] found id: ""
	I1216 21:03:49.938797   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.938805   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:49.938810   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:49.938857   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:49.975748   60933 cri.go:89] found id: ""
	I1216 21:03:49.975781   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.975792   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:49.975800   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:49.975876   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:50.013057   60933 cri.go:89] found id: ""
	I1216 21:03:50.013082   60933 logs.go:282] 0 containers: []
	W1216 21:03:50.013090   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:50.013127   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:50.013178   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:50.049106   60933 cri.go:89] found id: ""
	I1216 21:03:50.049138   60933 logs.go:282] 0 containers: []
	W1216 21:03:50.049150   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:50.049161   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:50.049176   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:50.063815   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:50.063847   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:50.137801   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:50.137826   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:50.137841   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:50.218456   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:50.218495   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:50.263347   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:50.263379   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:52.824077   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:52.838096   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:52.838185   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:52.880550   60933 cri.go:89] found id: ""
	I1216 21:03:52.880582   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.880593   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:52.880600   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:52.880658   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:52.919728   60933 cri.go:89] found id: ""
	I1216 21:03:52.919751   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.919759   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:52.919764   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:52.919819   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:52.957519   60933 cri.go:89] found id: ""
	I1216 21:03:52.957542   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.957549   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:52.957555   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:52.957607   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:52.996631   60933 cri.go:89] found id: ""
	I1216 21:03:52.996663   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.996673   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:52.996681   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:52.996745   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:53.059902   60933 cri.go:89] found id: ""
	I1216 21:03:53.060014   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.060030   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:53.060039   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:53.060105   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:53.099367   60933 cri.go:89] found id: ""
	I1216 21:03:53.099395   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.099406   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:53.099419   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:53.099486   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:53.140668   60933 cri.go:89] found id: ""
	I1216 21:03:53.140696   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.140704   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:53.140709   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:53.140777   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:53.179182   60933 cri.go:89] found id: ""
	I1216 21:03:53.179208   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.179216   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:53.179225   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:53.179236   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:53.233441   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:53.233481   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:53.247526   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:53.247569   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:53.321868   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:53.321895   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:53.321911   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:53.410904   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:53.410959   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:49.936523   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:51.936955   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.441538   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:52.319658   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.319887   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.955490   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:57.456080   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:55.954371   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:55.968506   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:55.968570   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:56.005087   60933 cri.go:89] found id: ""
	I1216 21:03:56.005118   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.005130   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:56.005137   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:56.005205   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:56.039443   60933 cri.go:89] found id: ""
	I1216 21:03:56.039467   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.039475   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:56.039486   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:56.039537   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:56.078181   60933 cri.go:89] found id: ""
	I1216 21:03:56.078213   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.078224   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:56.078231   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:56.078289   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:56.115809   60933 cri.go:89] found id: ""
	I1216 21:03:56.115833   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.115841   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:56.115848   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:56.115901   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:56.154299   60933 cri.go:89] found id: ""
	I1216 21:03:56.154323   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.154330   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:56.154336   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:56.154395   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:56.193069   60933 cri.go:89] found id: ""
	I1216 21:03:56.193098   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.193106   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:56.193112   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:56.193161   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:56.231067   60933 cri.go:89] found id: ""
	I1216 21:03:56.231099   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.231118   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:56.231125   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:56.231191   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:56.270980   60933 cri.go:89] found id: ""
	I1216 21:03:56.271011   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.271022   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:56.271035   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:56.271050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:56.321374   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:56.321405   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:56.336802   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:56.336847   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:56.414052   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:56.414078   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:56.414091   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:56.499118   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:56.499158   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:59.049386   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:59.063191   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:59.063300   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:59.102136   60933 cri.go:89] found id: ""
	I1216 21:03:59.102169   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.102180   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:59.102187   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:59.102255   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:59.138311   60933 cri.go:89] found id: ""
	I1216 21:03:59.138340   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.138357   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:59.138364   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:59.138431   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:59.176131   60933 cri.go:89] found id: ""
	I1216 21:03:59.176159   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.176169   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:59.176177   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:59.176259   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:59.214274   60933 cri.go:89] found id: ""
	I1216 21:03:59.214308   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.214320   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:59.214327   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:59.214397   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:59.254499   60933 cri.go:89] found id: ""
	I1216 21:03:59.254524   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.254531   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:59.254537   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:59.254602   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:59.292715   60933 cri.go:89] found id: ""
	I1216 21:03:59.292755   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.292765   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:59.292772   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:59.292836   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:59.333279   60933 cri.go:89] found id: ""
	I1216 21:03:59.333314   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.333325   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:59.333332   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:59.333404   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:59.372071   60933 cri.go:89] found id: ""
	I1216 21:03:59.372104   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.372116   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:59.372126   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:59.372143   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:59.389021   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:59.389051   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 21:03:56.936508   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:59.438217   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:56.323300   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:58.819599   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:59.456242   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:01.956873   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	W1216 21:03:59.503281   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:59.503304   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:59.503316   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:59.581761   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:59.581797   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:59.627604   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:59.627646   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:02.179425   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:02.195786   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:04:02.195850   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:04:02.239763   60933 cri.go:89] found id: ""
	I1216 21:04:02.239790   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.239801   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:04:02.239809   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:04:02.239873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:04:02.278885   60933 cri.go:89] found id: ""
	I1216 21:04:02.278914   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.278926   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:04:02.278935   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:04:02.279004   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:04:02.320701   60933 cri.go:89] found id: ""
	I1216 21:04:02.320731   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.320742   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:04:02.320749   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:04:02.320811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:04:02.357726   60933 cri.go:89] found id: ""
	I1216 21:04:02.357757   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.357767   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:04:02.357773   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:04:02.357826   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:04:02.399577   60933 cri.go:89] found id: ""
	I1216 21:04:02.399609   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.399618   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:04:02.399624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:04:02.399687   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:04:02.445559   60933 cri.go:89] found id: ""
	I1216 21:04:02.445590   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.445600   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:04:02.445607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:04:02.445670   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:04:02.482983   60933 cri.go:89] found id: ""
	I1216 21:04:02.483015   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.483027   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:04:02.483035   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:04:02.483116   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:04:02.523028   60933 cri.go:89] found id: ""
	I1216 21:04:02.523055   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.523063   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:04:02.523073   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:04:02.523084   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:02.577447   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:04:02.577487   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:04:02.594539   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:04:02.594567   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:04:02.683805   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:04:02.683832   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:04:02.683848   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:04:02.763377   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:04:02.763416   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:04:01.937214   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:04.436771   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:01.319860   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:03.320323   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:04.454654   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:06.456145   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:05.311029   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:05.328358   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:04:05.328438   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:04:05.367378   60933 cri.go:89] found id: ""
	I1216 21:04:05.367402   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.367409   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:04:05.367419   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:04:05.367468   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:04:05.406268   60933 cri.go:89] found id: ""
	I1216 21:04:05.406291   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.406301   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:04:05.406306   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:04:05.406353   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:04:05.444737   60933 cri.go:89] found id: ""
	I1216 21:04:05.444767   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.444778   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:04:05.444787   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:04:05.444836   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:04:05.484044   60933 cri.go:89] found id: ""
	I1216 21:04:05.484132   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.484153   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:04:05.484161   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:04:05.484222   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:04:05.523395   60933 cri.go:89] found id: ""
	I1216 21:04:05.523420   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.523431   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:04:05.523439   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:04:05.523501   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:04:05.566925   60933 cri.go:89] found id: ""
	I1216 21:04:05.566954   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.566967   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:04:05.566974   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:04:05.567036   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:04:05.611275   60933 cri.go:89] found id: ""
	I1216 21:04:05.611303   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.611314   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:04:05.611321   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:04:05.611396   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:04:05.650340   60933 cri.go:89] found id: ""
	I1216 21:04:05.650371   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.650379   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:04:05.650389   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:04:05.650400   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:05.702277   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:04:05.702321   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:04:05.718685   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:04:05.718713   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:04:05.794979   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:04:05.795005   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:04:05.795020   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:04:05.897348   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:04:05.897383   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:04:08.447268   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:08.462553   60933 kubeadm.go:597] duration metric: took 4m2.545161532s to restartPrimaryControlPlane
	W1216 21:04:08.462621   60933 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:08.462650   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:06.437699   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:08.936904   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:05.813413   60829 pod_ready.go:82] duration metric: took 4m0.000648161s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:05.813448   60829 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:05.813472   60829 pod_ready.go:39] duration metric: took 4m14.577422135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:05.813498   60829 kubeadm.go:597] duration metric: took 4m22.010606819s to restartPrimaryControlPlane
	W1216 21:04:05.813559   60829 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:05.813593   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:10.315541   60933 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.85286561s)
	I1216 21:04:10.315622   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:10.330937   60933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:10.343702   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:10.356498   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:10.356526   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:10.356579   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:04:10.367777   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:10.367847   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:10.379109   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:04:10.389258   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:10.389313   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:10.399959   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:04:10.410664   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:10.410734   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:10.423138   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:04:10.433922   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:10.433976   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:04:10.445297   60933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:10.524236   60933 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 21:04:10.524344   60933 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:10.680331   60933 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:10.680489   60933 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:10.680641   60933 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 21:04:10.877305   60933 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:10.879375   60933 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:10.879496   60933 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:10.879567   60933 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:10.879647   60933 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:10.879748   60933 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:10.879865   60933 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:10.880127   60933 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:10.881047   60933 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:10.881874   60933 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:10.882778   60933 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:10.883678   60933 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:10.884029   60933 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:10.884130   60933 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:11.034011   60933 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:11.273509   60933 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:11.477553   60933 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:11.542158   60933 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:11.565791   60933 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:11.567317   60933 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:11.567409   60933 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:11.763223   60933 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:08.955135   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:10.957061   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:11.766107   60933 out.go:235]   - Booting up control plane ...
	I1216 21:04:11.766257   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:11.766367   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:11.768484   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:11.773601   60933 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:11.780554   60933 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 21:04:11.436931   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:13.437532   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:13.455175   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:15.455370   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.456801   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:15.936107   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.937233   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.949449   60421 pod_ready.go:82] duration metric: took 4m0.000885381s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:17.949484   60421 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:17.949501   60421 pod_ready.go:39] duration metric: took 4m10.554596731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:17.949525   60421 kubeadm.go:597] duration metric: took 4m42.414672113s to restartPrimaryControlPlane
	W1216 21:04:17.949588   60421 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:17.949619   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:19.938104   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:22.436710   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:24.936550   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:26.936809   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:29.437478   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:33.833179   60829 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (28.019561403s)
	I1216 21:04:33.833265   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:33.850170   60829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:33.862112   60829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:33.873752   60829 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:33.873777   60829 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:33.873832   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1216 21:04:33.885038   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:33.885115   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:33.897352   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1216 21:04:33.911055   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:33.911115   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:33.925077   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1216 21:04:33.938925   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:33.938997   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:33.952022   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1216 21:04:33.963099   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:33.963176   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:04:33.974080   60829 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:34.031525   60829 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:04:34.031643   60829 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:34.153173   60829 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:34.153340   60829 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:34.153453   60829 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:04:34.166258   60829 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:31.936620   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:33.938157   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:34.168275   60829 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:34.168388   60829 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:34.168463   60829 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:34.168545   60829 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:34.168633   60829 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:34.168740   60829 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:34.168837   60829 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:34.168934   60829 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:34.169020   60829 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:34.169119   60829 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:34.169222   60829 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:34.169278   60829 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:34.169365   60829 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:34.277660   60829 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:34.526364   60829 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:04:34.629728   60829 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:34.757824   60829 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:34.838922   60829 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:34.839431   60829 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:34.841925   60829 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:34.843761   60829 out.go:235]   - Booting up control plane ...
	I1216 21:04:34.843874   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:34.843945   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:34.846919   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:34.866038   60829 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:34.875031   60829 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:34.875112   60829 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:35.016713   60829 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:04:35.016879   60829 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:04:36.437043   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:38.437584   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:36.017947   60829 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001159452s
	I1216 21:04:36.018086   60829 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:04:40.519460   60829 kubeadm.go:310] [api-check] The API server is healthy after 4.501460025s
	I1216 21:04:40.533680   60829 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:04:40.552611   60829 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:04:40.585691   60829 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:04:40.585905   60829 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-327790 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:04:40.613752   60829 kubeadm.go:310] [bootstrap-token] Using token: w829op.p4bszg1q76emsxit
	I1216 21:04:40.615428   60829 out.go:235]   - Configuring RBAC rules ...
	I1216 21:04:40.615556   60829 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:04:40.629296   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:04:40.638449   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:04:40.644143   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:04:40.648665   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:04:40.653151   60829 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:04:40.926399   60829 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:04:41.370569   60829 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:04:41.927555   60829 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:04:41.928692   60829 kubeadm.go:310] 
	I1216 21:04:41.928769   60829 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:04:41.928779   60829 kubeadm.go:310] 
	I1216 21:04:41.928851   60829 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:04:41.928878   60829 kubeadm.go:310] 
	I1216 21:04:41.928928   60829 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:04:41.929005   60829 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:04:41.929053   60829 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:04:41.929060   60829 kubeadm.go:310] 
	I1216 21:04:41.929107   60829 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:04:41.929114   60829 kubeadm.go:310] 
	I1216 21:04:41.929151   60829 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:04:41.929157   60829 kubeadm.go:310] 
	I1216 21:04:41.929205   60829 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:04:41.929264   60829 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:04:41.929325   60829 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:04:41.929354   60829 kubeadm.go:310] 
	I1216 21:04:41.929527   60829 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:04:41.929657   60829 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:04:41.929674   60829 kubeadm.go:310] 
	I1216 21:04:41.929787   60829 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token w829op.p4bszg1q76emsxit \
	I1216 21:04:41.929941   60829 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:04:41.929975   60829 kubeadm.go:310] 	--control-plane 
	I1216 21:04:41.929984   60829 kubeadm.go:310] 
	I1216 21:04:41.930103   60829 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:04:41.930124   60829 kubeadm.go:310] 
	I1216 21:04:41.930245   60829 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token w829op.p4bszg1q76emsxit \
	I1216 21:04:41.930378   60829 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:04:41.931554   60829 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:04:41.931685   60829 cni.go:84] Creating CNI manager for ""
	I1216 21:04:41.931699   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:04:41.933748   60829 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:04:40.937882   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:43.436864   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:41.935317   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:04:41.947502   60829 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:04:41.976180   60829 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:04:41.976288   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:41.976323   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-327790 minikube.k8s.io/updated_at=2024_12_16T21_04_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=default-k8s-diff-port-327790 minikube.k8s.io/primary=true
	I1216 21:04:42.010154   60829 ops.go:34] apiserver oom_adj: -16
	I1216 21:04:42.181919   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:42.682201   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:43.182557   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:43.682418   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:44.182318   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:44.682793   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.182342   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.682678   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.777484   60829 kubeadm.go:1113] duration metric: took 3.801254961s to wait for elevateKubeSystemPrivileges
	I1216 21:04:45.777522   60829 kubeadm.go:394] duration metric: took 5m2.030533321s to StartCluster
	I1216 21:04:45.777543   60829 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:45.777644   60829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:04:45.780034   60829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:45.780369   60829 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:04:45.780450   60829 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:04:45.780566   60829 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780579   60829 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780595   60829 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-327790"
	W1216 21:04:45.780606   60829 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:04:45.780599   60829 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780609   60829 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-327790"
	I1216 21:04:45.780627   60829 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-327790"
	I1216 21:04:45.780627   60829 config.go:182] Loaded profile config "default-k8s-diff-port-327790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	W1216 21:04:45.780638   60829 addons.go:243] addon metrics-server should already be in state true
	I1216 21:04:45.780648   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.780675   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.781091   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781091   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781132   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781136   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.781174   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.781137   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.782022   60829 out.go:177] * Verifying Kubernetes components...
	I1216 21:04:45.783549   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:04:45.799326   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42295
	I1216 21:04:45.799443   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
	I1216 21:04:45.799865   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.800491   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.800510   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.800588   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.801082   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.801102   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.801178   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I1216 21:04:45.801202   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.801517   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.801539   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.801707   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.801925   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.801959   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.801974   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.801992   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.802319   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.802817   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.802857   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.805750   60829 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-327790"
	W1216 21:04:45.805775   60829 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:04:45.805806   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.806153   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.806193   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.820545   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46261
	I1216 21:04:45.821062   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.821598   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.821625   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.822086   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.822294   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.823995   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.824775   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I1216 21:04:45.825269   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.825754   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.825778   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.826117   60829 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:04:45.826158   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.826843   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.826892   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.827527   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:04:45.827557   60829 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:04:45.827577   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.829352   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I1216 21:04:45.829769   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.830197   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.830217   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.830543   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.830767   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.831413   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.832010   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.832030   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.832202   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.832424   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.832496   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.832847   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.833056   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.834475   60829 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:04:45.835944   60829 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:45.835965   60829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:04:45.835983   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.839118   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.839533   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.839560   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.839744   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.839947   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.840087   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.840218   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.845365   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
	I1216 21:04:45.845925   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.847042   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.847060   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.847450   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.847669   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.849934   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.850165   60829 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:45.850182   60829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:04:45.850199   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.853083   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.853493   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.853518   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.853679   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.853848   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.854024   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.854177   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.978935   60829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:04:46.010601   60829 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-327790" to be "Ready" ...
	I1216 21:04:46.019674   60829 node_ready.go:49] node "default-k8s-diff-port-327790" has status "Ready":"True"
	I1216 21:04:46.019704   60829 node_ready.go:38] duration metric: took 9.066576ms for node "default-k8s-diff-port-327790" to be "Ready" ...
	I1216 21:04:46.019715   60829 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:46.033957   60829 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:46.103779   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:04:46.103812   60829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:04:46.120299   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:46.171131   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:04:46.171171   60829 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:04:46.171280   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:46.244556   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:46.244587   60829 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:04:46.332646   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:47.461793   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.34145582s)
	I1216 21:04:47.461871   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.129193295s)
	I1216 21:04:47.461793   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.290486436s)
	I1216 21:04:47.461899   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461913   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461918   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.461875   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461982   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.461927   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462463   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462469   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462480   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462488   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462494   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462504   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462506   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462511   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462516   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462521   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462529   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462556   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462573   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462581   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462588   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462805   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462816   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462816   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462827   60829 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-327790"
	I1216 21:04:47.462841   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462848   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.463049   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.463067   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.524466   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.524497   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.524822   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.524843   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.524869   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.526679   60829 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1216 21:04:45.861404   60421 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.911759863s)
	I1216 21:04:45.861483   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:45.889560   60421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:45.922090   60421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:45.945227   60421 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:45.945261   60421 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:45.945306   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:04:45.960594   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:45.960666   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:45.980613   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:04:46.005349   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:46.005431   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:46.021683   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:04:46.032967   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:46.033042   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:46.064718   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:04:46.078736   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:46.078805   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:04:46.092798   60421 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:46.293434   60421 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:04:45.430910   60215 pod_ready.go:82] duration metric: took 4m0.000948437s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:45.430950   60215 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:45.430970   60215 pod_ready.go:39] duration metric: took 4m12.926677248s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:45.431002   60215 kubeadm.go:597] duration metric: took 4m20.847109652s to restartPrimaryControlPlane
	W1216 21:04:45.431059   60215 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:45.431092   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:47.527909   60829 addons.go:510] duration metric: took 1.747463467s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1216 21:04:48.047956   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:51.781856   60933 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 21:04:51.782285   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:04:51.782543   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:04:54.704462   60421 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:04:54.704514   60421 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:54.704600   60421 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:54.704736   60421 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:54.704839   60421 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:04:54.704894   60421 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:54.706650   60421 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:54.706771   60421 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:54.706865   60421 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:54.706999   60421 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:54.707113   60421 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:54.707256   60421 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:54.707344   60421 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:54.707478   60421 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:54.707573   60421 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:54.707683   60421 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:54.707806   60421 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:54.707851   60421 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:54.707902   60421 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:54.707968   60421 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:54.708056   60421 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:04:54.708127   60421 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:54.708225   60421 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:54.708305   60421 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:54.708427   60421 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:54.708526   60421 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:54.710014   60421 out.go:235]   - Booting up control plane ...
	I1216 21:04:54.710113   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:54.710197   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:54.710254   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:54.710361   60421 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:54.710457   60421 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:54.710511   60421 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:54.710670   60421 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:04:54.710792   60421 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:04:54.710852   60421 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.532878ms
	I1216 21:04:54.710912   60421 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:04:54.710982   60421 kubeadm.go:310] [api-check] The API server is healthy after 5.50189872s
	I1216 21:04:54.711125   60421 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:04:54.711281   60421 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:04:54.711362   60421 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:04:54.711618   60421 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-232338 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:04:54.711712   60421 kubeadm.go:310] [bootstrap-token] Using token: knn1cl.i9horbjuutctjfyf
	I1216 21:04:54.714363   60421 out.go:235]   - Configuring RBAC rules ...
	I1216 21:04:54.714488   60421 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:04:54.714560   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:04:54.714674   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:04:54.714820   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:04:54.714914   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:04:54.714981   60421 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:04:54.715083   60421 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:04:54.715159   60421 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:04:54.715228   60421 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:04:54.715237   60421 kubeadm.go:310] 
	I1216 21:04:54.715345   60421 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:04:54.715359   60421 kubeadm.go:310] 
	I1216 21:04:54.715455   60421 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:04:54.715463   60421 kubeadm.go:310] 
	I1216 21:04:54.715510   60421 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:04:54.715596   60421 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:04:54.715669   60421 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:04:54.715679   60421 kubeadm.go:310] 
	I1216 21:04:54.715767   60421 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:04:54.715775   60421 kubeadm.go:310] 
	I1216 21:04:54.715842   60421 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:04:54.715851   60421 kubeadm.go:310] 
	I1216 21:04:54.715908   60421 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:04:54.715969   60421 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:04:54.716026   60421 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:04:54.716032   60421 kubeadm.go:310] 
	I1216 21:04:54.716106   60421 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:04:54.716171   60421 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:04:54.716177   60421 kubeadm.go:310] 
	I1216 21:04:54.716240   60421 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token knn1cl.i9horbjuutctjfyf \
	I1216 21:04:54.716340   60421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:04:54.716374   60421 kubeadm.go:310] 	--control-plane 
	I1216 21:04:54.716384   60421 kubeadm.go:310] 
	I1216 21:04:54.716457   60421 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:04:54.716467   60421 kubeadm.go:310] 
	I1216 21:04:54.716534   60421 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token knn1cl.i9horbjuutctjfyf \
	I1216 21:04:54.716634   60421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:04:54.716644   60421 cni.go:84] Creating CNI manager for ""
	I1216 21:04:54.716651   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:04:54.718260   60421 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:04:50.542207   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:52.542453   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:55.040960   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:56.042145   60829 pod_ready.go:93] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.042175   60829 pod_ready.go:82] duration metric: took 10.008191514s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.042192   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.047996   60829 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.048022   60829 pod_ready.go:82] duration metric: took 5.821217ms for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.048031   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.052582   60829 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.052608   60829 pod_ready.go:82] duration metric: took 4.569092ms for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.052619   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.056805   60829 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.056834   60829 pod_ready.go:82] duration metric: took 4.206726ms for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.056841   60829 pod_ready.go:39] duration metric: took 10.037112061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
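
(Editor's aside, not part of the minikube log.) The pod_ready waits above amount to polling each system pod until its Ready condition reports True. A minimal client-go sketch of that check, under the assumption of a reachable kubeconfig at ~/.kube/config and using a pod name taken from this run; this is not minikube's own pod_ready.go logic:

// podready.go - report whether a pod's Ready condition is True.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the test run writes its own under the Jenkins workspace.
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(),
		"etcd-default-k8s-diff-port-327790", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, cond := range pod.Status.Conditions {
		// The "Ready":"True" strings in the log come from this condition.
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}
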
	I1216 21:04:56.056855   60829 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:04:56.056904   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:56.076993   60829 api_server.go:72] duration metric: took 10.296578804s to wait for apiserver process to appear ...
	I1216 21:04:56.077023   60829 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:04:56.077045   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 21:04:56.082250   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1216 21:04:56.083348   60829 api_server.go:141] control plane version: v1.32.0
	I1216 21:04:56.083369   60829 api_server.go:131] duration metric: took 6.339438ms to wait for apiserver health ...
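
(Editor's aside, not part of the minikube log.) The healthz probe above is a plain HTTPS GET against the apiserver, which returns the body "ok" with status 200 when healthy. A minimal Go sketch of the same probe, using the apiserver address from this run; skipping TLS verification mirrors a throwaway test cluster with its own CA, not production practice:

// healthz.go - probe the apiserver /healthz endpoint, as the api_server.go check above does.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Skip certificate verification: the test cluster is signed by minikube's own CA.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.39.162:8444/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, string(body)) // expected here: 200 ok
}
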
	I1216 21:04:56.083377   60829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:04:56.090255   60829 system_pods.go:59] 9 kube-system pods found
	I1216 21:04:56.090290   60829 system_pods.go:61] "coredns-668d6bf9bc-2qcfx" [4ac98efa-96ff-4564-93de-4a61de7a6507] Running
	I1216 21:04:56.090302   60829 system_pods.go:61] "coredns-668d6bf9bc-fb7wx" [f2f2c0e7-893f-45ba-8da9-3b03f5560d89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:04:56.090310   60829 system_pods.go:61] "etcd-default-k8s-diff-port-327790" [5363e160-ef78-4737-89f9-5f4d0f0eab95] Running
	I1216 21:04:56.090318   60829 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-327790" [b53c6be6-e476-4a5a-80c2-96e701736820] Running
	I1216 21:04:56.090324   60829 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-327790" [57d8747a-7258-48c3-9bcd-6fedaa8b7431] Running
	I1216 21:04:56.090329   60829 system_pods.go:61] "kube-proxy-njqp8" [e5f1789d-b343-4c2e-b078-4a15f4b18569] Running
	I1216 21:04:56.090334   60829 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-327790" [e2303bbd-b9d9-4392-867f-6f5f43f74826] Running
	I1216 21:04:56.090342   60829 system_pods.go:61] "metrics-server-f79f97bbb-84xtf" [569c6717-dc12-474f-8156-d2dd9e410a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:04:56.090349   60829 system_pods.go:61] "storage-provisioner" [4e5b12f0-3d96-4dd0-81e7-300b82058d47] Running
	I1216 21:04:56.090360   60829 system_pods.go:74] duration metric: took 6.975795ms to wait for pod list to return data ...
	I1216 21:04:56.090373   60829 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:04:56.093967   60829 default_sa.go:45] found service account: "default"
	I1216 21:04:56.093998   60829 default_sa.go:55] duration metric: took 3.616693ms for default service account to be created ...
	I1216 21:04:56.094010   60829 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:04:56.241532   60829 system_pods.go:86] 9 kube-system pods found
	I1216 21:04:56.241568   60829 system_pods.go:89] "coredns-668d6bf9bc-2qcfx" [4ac98efa-96ff-4564-93de-4a61de7a6507] Running
	I1216 21:04:56.241582   60829 system_pods.go:89] "coredns-668d6bf9bc-fb7wx" [f2f2c0e7-893f-45ba-8da9-3b03f5560d89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:04:56.241589   60829 system_pods.go:89] "etcd-default-k8s-diff-port-327790" [5363e160-ef78-4737-89f9-5f4d0f0eab95] Running
	I1216 21:04:56.241597   60829 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-327790" [b53c6be6-e476-4a5a-80c2-96e701736820] Running
	I1216 21:04:56.241605   60829 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-327790" [57d8747a-7258-48c3-9bcd-6fedaa8b7431] Running
	I1216 21:04:56.241611   60829 system_pods.go:89] "kube-proxy-njqp8" [e5f1789d-b343-4c2e-b078-4a15f4b18569] Running
	I1216 21:04:56.241617   60829 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-327790" [e2303bbd-b9d9-4392-867f-6f5f43f74826] Running
	I1216 21:04:56.241624   60829 system_pods.go:89] "metrics-server-f79f97bbb-84xtf" [569c6717-dc12-474f-8156-d2dd9e410a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:04:56.241629   60829 system_pods.go:89] "storage-provisioner" [4e5b12f0-3d96-4dd0-81e7-300b82058d47] Running
	I1216 21:04:56.241639   60829 system_pods.go:126] duration metric: took 147.621114ms to wait for k8s-apps to be running ...
	I1216 21:04:56.241656   60829 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:04:56.241730   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:56.258891   60829 system_svc.go:56] duration metric: took 17.227056ms WaitForService to wait for kubelet
	I1216 21:04:56.258935   60829 kubeadm.go:582] duration metric: took 10.478521255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:04:56.258962   60829 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:04:56.438641   60829 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:04:56.438667   60829 node_conditions.go:123] node cpu capacity is 2
	I1216 21:04:56.438679   60829 node_conditions.go:105] duration metric: took 179.711624ms to run NodePressure ...
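
(Editor's aside, not part of the minikube log.) The NodePressure verification above reads the node's capacity figures (17734596Ki ephemeral storage, 2 CPUs) and its pressure conditions. A hedged client-go sketch of the same read, with the node name taken from this run; it is illustrative only, not the verifier minikube itself uses:

// nodeconditions.go - print node capacity and pressure conditions.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(),
		"default-k8s-diff-port-327790", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
	fmt.Println("ephemeral-storage capacity:", node.Status.Capacity.StorageEphemeral().String())
	for _, cond := range node.Status.Conditions {
		// MemoryPressure/DiskPressure/PIDPressure should be False on a healthy node.
		fmt.Printf("%s=%s\n", cond.Type, cond.Status)
	}
}
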
	I1216 21:04:56.438692   60829 start.go:241] waiting for startup goroutines ...
	I1216 21:04:56.438700   60829 start.go:246] waiting for cluster config update ...
	I1216 21:04:56.438714   60829 start.go:255] writing updated cluster config ...
	I1216 21:04:56.438975   60829 ssh_runner.go:195] Run: rm -f paused
	I1216 21:04:56.490195   60829 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:04:56.492395   60829 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-327790" cluster and "default" namespace by default
	I1216 21:04:54.719483   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:04:54.732035   60421 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:04:54.754010   60421 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:04:54.754122   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:54.754177   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-232338 minikube.k8s.io/updated_at=2024_12_16T21_04_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=no-preload-232338 minikube.k8s.io/primary=true
	I1216 21:04:54.773008   60421 ops.go:34] apiserver oom_adj: -16
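
(Editor's aside, not part of the minikube log.) The ops.go line above reports the apiserver's oom_adj as -16, obtained by cat-ing /proc/$(pgrep kube-apiserver)/oom_adj over SSH. A small Go sketch of the same read, assuming it runs on the node itself rather than the Jenkins host:

// oomadj.go - read the oom_adj value of the running kube-apiserver.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep mirrors the shell pipeline the log runs over SSH.
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("kube-apiserver oom_adj: %s", data) // e.g. -16, as logged above
}
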
	I1216 21:04:55.009573   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:55.510039   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:56.009645   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:56.509608   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:57.009714   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:57.509902   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.009901   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.509631   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.632896   60421 kubeadm.go:1113] duration metric: took 3.878846316s to wait for elevateKubeSystemPrivileges
	I1216 21:04:58.632933   60421 kubeadm.go:394] duration metric: took 5m23.15322559s to StartCluster
	I1216 21:04:58.632951   60421 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:58.633031   60421 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:04:58.635409   60421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:58.635720   60421 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:04:58.635835   60421 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:04:58.635944   60421 addons.go:69] Setting storage-provisioner=true in profile "no-preload-232338"
	I1216 21:04:58.635958   60421 config.go:182] Loaded profile config "no-preload-232338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:04:58.635966   60421 addons.go:234] Setting addon storage-provisioner=true in "no-preload-232338"
	I1216 21:04:58.635969   60421 addons.go:69] Setting default-storageclass=true in profile "no-preload-232338"
	W1216 21:04:58.635975   60421 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:04:58.635986   60421 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-232338"
	I1216 21:04:58.636005   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.635997   60421 addons.go:69] Setting metrics-server=true in profile "no-preload-232338"
	I1216 21:04:58.636029   60421 addons.go:234] Setting addon metrics-server=true in "no-preload-232338"
	W1216 21:04:58.636038   60421 addons.go:243] addon metrics-server should already be in state true
	I1216 21:04:58.636069   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.636428   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636460   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.636428   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636513   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636532   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.636549   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.637558   60421 out.go:177] * Verifying Kubernetes components...
	I1216 21:04:58.639254   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:04:58.652770   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
	I1216 21:04:58.652789   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35093
	I1216 21:04:58.653247   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.653368   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.653818   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.653836   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.653944   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.653963   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.654562   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.654565   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.654775   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.655078   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.655117   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.656383   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I1216 21:04:58.656987   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.657520   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.657553   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.657933   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.658517   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.658566   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.658692   60421 addons.go:234] Setting addon default-storageclass=true in "no-preload-232338"
	W1216 21:04:58.658708   60421 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:04:58.658737   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.659001   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.659043   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.672942   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34153
	I1216 21:04:58.673478   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.674034   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.674063   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.674421   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.674594   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I1216 21:04:58.674614   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.674994   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.675686   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.675699   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.676334   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.676480   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.676898   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.676931   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.679230   60421 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:04:58.680032   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33309
	I1216 21:04:58.680609   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.680754   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:04:58.680772   60421 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:04:58.680794   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.681202   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.681221   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.681610   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.681815   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.683608   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.684266   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.684765   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.684793   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.684925   60421 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:04:56.783069   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:04:56.783323   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:04:58.684932   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.685156   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.685321   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.685515   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.686360   60421 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:58.686379   60421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:04:58.686396   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.689909   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.690365   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.690392   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.690698   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.690927   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.691095   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.691305   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.695899   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36017
	I1216 21:04:58.696274   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.696758   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.696777   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.697064   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.697225   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.698530   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.698751   60421 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:58.698766   60421 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:04:58.698784   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.701986   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.702420   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.702473   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.702655   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.702839   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.702979   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.703197   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.866115   60421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:04:58.892287   60421 node_ready.go:35] waiting up to 6m0s for node "no-preload-232338" to be "Ready" ...
	I1216 21:04:58.949580   60421 node_ready.go:49] node "no-preload-232338" has status "Ready":"True"
	I1216 21:04:58.949610   60421 node_ready.go:38] duration metric: took 57.274849ms for node "no-preload-232338" to be "Ready" ...
	I1216 21:04:58.949622   60421 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:58.983955   60421 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:59.036124   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:59.039113   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:04:59.039139   60421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:04:59.087493   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:04:59.087531   60421 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:04:59.094976   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:59.129816   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:59.129840   60421 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:04:59.236390   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:00.157688   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.121522553s)
	I1216 21:05:00.157736   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.157751   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.157764   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.06274536s)
	I1216 21:05:00.157830   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.157851   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158259   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158270   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158282   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158288   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158297   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158314   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.158327   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158319   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158344   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.158352   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158589   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158604   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158624   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158589   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158655   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.182819   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.182844   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.183229   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.183271   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.679810   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.44337328s)
	I1216 21:05:00.679867   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.679880   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.680233   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.680254   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.680266   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.680274   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.680612   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.680632   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.680643   60421 addons.go:475] Verifying addon metrics-server=true in "no-preload-232338"
	I1216 21:05:00.680659   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.682400   60421 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1216 21:05:00.684226   60421 addons.go:510] duration metric: took 2.048395371s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1216 21:05:00.997599   60421 pod_ready.go:103] pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:01.990706   60421 pod_ready.go:93] pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:01.990733   60421 pod_ready.go:82] duration metric: took 3.006750411s for pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:01.990742   60421 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:03.998055   60421 pod_ready.go:103] pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:05.997310   60421 pod_ready.go:93] pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:05.997334   60421 pod_ready.go:82] duration metric: took 4.006586503s for pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:05.997346   60421 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.002576   60421 pod_ready.go:93] pod "etcd-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.002597   60421 pod_ready.go:82] duration metric: took 5.244238ms for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.002607   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.007407   60421 pod_ready.go:93] pod "kube-apiserver-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.007435   60421 pod_ready.go:82] duration metric: took 4.820838ms for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.007449   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.012239   60421 pod_ready.go:93] pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.012263   60421 pod_ready.go:82] duration metric: took 4.806874ms for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.012273   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m5hq8" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.017087   60421 pod_ready.go:93] pod "kube-proxy-m5hq8" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.017111   60421 pod_ready.go:82] duration metric: took 4.830348ms for pod "kube-proxy-m5hq8" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.017124   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.393947   60421 pod_ready.go:93] pod "kube-scheduler-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.393978   60421 pod_ready.go:82] duration metric: took 376.845934ms for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.393989   60421 pod_ready.go:39] duration metric: took 7.444356073s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:05:06.394008   60421 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:05:06.394074   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:05:06.410287   60421 api_server.go:72] duration metric: took 7.774519412s to wait for apiserver process to appear ...
	I1216 21:05:06.410327   60421 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:05:06.410363   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:05:06.415344   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 200:
	ok
	I1216 21:05:06.416302   60421 api_server.go:141] control plane version: v1.32.0
	I1216 21:05:06.416324   60421 api_server.go:131] duration metric: took 5.989768ms to wait for apiserver health ...
	I1216 21:05:06.416333   60421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:05:06.598174   60421 system_pods.go:59] 9 kube-system pods found
	I1216 21:05:06.598205   60421 system_pods.go:61] "coredns-668d6bf9bc-4wwvd" [1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e] Running
	I1216 21:05:06.598210   60421 system_pods.go:61] "coredns-668d6bf9bc-c4qfj" [b9bf3125-1e6d-4794-a2e6-2ff7ed5132b1] Running
	I1216 21:05:06.598214   60421 system_pods.go:61] "etcd-no-preload-232338" [5318f756-4c64-46be-b71b-94d53f48f0e9] Running
	I1216 21:05:06.598218   60421 system_pods.go:61] "kube-apiserver-no-preload-232338" [8d8fa68c-80ab-4747-a2ce-eeaff8847c29] Running
	I1216 21:05:06.598222   60421 system_pods.go:61] "kube-controller-manager-no-preload-232338" [8626806c-cd3f-488c-95c3-4b909878c1e4] Running
	I1216 21:05:06.598224   60421 system_pods.go:61] "kube-proxy-m5hq8" [ca0d357a-dda2-4508-a954-5c67eaf5b8ac] Running
	I1216 21:05:06.598229   60421 system_pods.go:61] "kube-scheduler-no-preload-232338" [8944107e-9e5c-474b-a0c1-9461e797a131] Running
	I1216 21:05:06.598236   60421 system_pods.go:61] "metrics-server-f79f97bbb-l7dcr" [fabafb40-1cb8-427b-88a6-37eeb6fd5b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:06.598240   60421 system_pods.go:61] "storage-provisioner" [3b742666-dfd4-4c9b-95a9-25367ec2a718] Running
	I1216 21:05:06.598248   60421 system_pods.go:74] duration metric: took 181.908567ms to wait for pod list to return data ...
	I1216 21:05:06.598255   60421 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:05:06.794774   60421 default_sa.go:45] found service account: "default"
	I1216 21:05:06.794805   60421 default_sa.go:55] duration metric: took 196.542698ms for default service account to be created ...
	I1216 21:05:06.794823   60421 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:05:06.998297   60421 system_pods.go:86] 9 kube-system pods found
	I1216 21:05:06.998332   60421 system_pods.go:89] "coredns-668d6bf9bc-4wwvd" [1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e] Running
	I1216 21:05:06.998341   60421 system_pods.go:89] "coredns-668d6bf9bc-c4qfj" [b9bf3125-1e6d-4794-a2e6-2ff7ed5132b1] Running
	I1216 21:05:06.998348   60421 system_pods.go:89] "etcd-no-preload-232338" [5318f756-4c64-46be-b71b-94d53f48f0e9] Running
	I1216 21:05:06.998354   60421 system_pods.go:89] "kube-apiserver-no-preload-232338" [8d8fa68c-80ab-4747-a2ce-eeaff8847c29] Running
	I1216 21:05:06.998359   60421 system_pods.go:89] "kube-controller-manager-no-preload-232338" [8626806c-cd3f-488c-95c3-4b909878c1e4] Running
	I1216 21:05:06.998364   60421 system_pods.go:89] "kube-proxy-m5hq8" [ca0d357a-dda2-4508-a954-5c67eaf5b8ac] Running
	I1216 21:05:06.998369   60421 system_pods.go:89] "kube-scheduler-no-preload-232338" [8944107e-9e5c-474b-a0c1-9461e797a131] Running
	I1216 21:05:06.998378   60421 system_pods.go:89] "metrics-server-f79f97bbb-l7dcr" [fabafb40-1cb8-427b-88a6-37eeb6fd5b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:06.998385   60421 system_pods.go:89] "storage-provisioner" [3b742666-dfd4-4c9b-95a9-25367ec2a718] Running
	I1216 21:05:06.998397   60421 system_pods.go:126] duration metric: took 203.564807ms to wait for k8s-apps to be running ...
	I1216 21:05:06.998407   60421 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:05:06.998457   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:07.014979   60421 system_svc.go:56] duration metric: took 16.561363ms WaitForService to wait for kubelet
	I1216 21:05:07.015013   60421 kubeadm.go:582] duration metric: took 8.379260538s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:05:07.015029   60421 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:05:07.195470   60421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:05:07.195504   60421 node_conditions.go:123] node cpu capacity is 2
	I1216 21:05:07.195516   60421 node_conditions.go:105] duration metric: took 180.480949ms to run NodePressure ...
	I1216 21:05:07.195530   60421 start.go:241] waiting for startup goroutines ...
	I1216 21:05:07.195541   60421 start.go:246] waiting for cluster config update ...
	I1216 21:05:07.195554   60421 start.go:255] writing updated cluster config ...
	I1216 21:05:07.195857   60421 ssh_runner.go:195] Run: rm -f paused
	I1216 21:05:07.244442   60421 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:05:07.246788   60421 out.go:177] * Done! kubectl is now configured to use "no-preload-232338" cluster and "default" namespace by default
	I1216 21:05:06.784032   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:05:06.784224   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:05:13.066274   60215 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.635155592s)
	I1216 21:05:13.066379   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:13.096145   60215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:05:13.109211   60215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:05:13.125828   60215 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:05:13.125859   60215 kubeadm.go:157] found existing configuration files:
	
	I1216 21:05:13.125914   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:05:13.146982   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:05:13.147053   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:05:13.159382   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:05:13.176492   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:05:13.176572   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:05:13.190933   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:05:13.213230   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:05:13.213301   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:05:13.224631   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:05:13.234914   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:05:13.234975   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:05:13.245513   60215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:05:13.300399   60215 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:05:13.300491   60215 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:05:13.424114   60215 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:05:13.424252   60215 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:05:13.424372   60215 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:05:13.434507   60215 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:05:13.436710   60215 out.go:235]   - Generating certificates and keys ...
	I1216 21:05:13.436825   60215 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:05:13.436985   60215 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:05:13.437127   60215 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:05:13.437215   60215 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:05:13.437317   60215 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:05:13.437404   60215 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:05:13.437822   60215 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:05:13.438183   60215 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:05:13.438724   60215 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:05:13.439096   60215 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:05:13.439334   60215 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:05:13.439399   60215 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:05:13.528853   60215 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:05:13.700795   60215 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:05:13.890142   60215 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:05:14.166151   60215 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:05:14.310513   60215 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:05:14.311121   60215 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:05:14.317114   60215 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:05:14.319080   60215 out.go:235]   - Booting up control plane ...
	I1216 21:05:14.319218   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:05:14.319332   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:05:14.319518   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:05:14.340394   60215 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:05:14.348443   60215 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:05:14.348533   60215 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:05:14.493244   60215 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:05:14.493456   60215 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:05:14.995210   60215 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.042805ms
	I1216 21:05:14.995325   60215 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:05:20.000911   60215 kubeadm.go:310] [api-check] The API server is healthy after 5.002773967s
	I1216 21:05:20.019851   60215 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:05:20.037375   60215 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:05:20.074003   60215 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:05:20.074237   60215 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-606219 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:05:20.087136   60215 kubeadm.go:310] [bootstrap-token] Using token: wev02f.lvhctqt9pq1agi1c
	I1216 21:05:20.088742   60215 out.go:235]   - Configuring RBAC rules ...
	I1216 21:05:20.088893   60215 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:05:20.094114   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:05:20.101979   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:05:20.105419   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:05:20.112443   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:05:20.116045   60215 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:05:20.406790   60215 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:05:20.844101   60215 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:05:21.414298   60215 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:05:21.414397   60215 kubeadm.go:310] 
	I1216 21:05:21.414488   60215 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:05:21.414504   60215 kubeadm.go:310] 
	I1216 21:05:21.414636   60215 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:05:21.414655   60215 kubeadm.go:310] 
	I1216 21:05:21.414694   60215 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:05:21.414796   60215 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:05:21.414866   60215 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:05:21.414877   60215 kubeadm.go:310] 
	I1216 21:05:21.414978   60215 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:05:21.415004   60215 kubeadm.go:310] 
	I1216 21:05:21.415071   60215 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:05:21.415080   60215 kubeadm.go:310] 
	I1216 21:05:21.415147   60215 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:05:21.415314   60215 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:05:21.415424   60215 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:05:21.415444   60215 kubeadm.go:310] 
	I1216 21:05:21.415568   60215 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:05:21.415674   60215 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:05:21.415690   60215 kubeadm.go:310] 
	I1216 21:05:21.415837   60215 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wev02f.lvhctqt9pq1agi1c \
	I1216 21:05:21.415982   60215 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:05:21.416023   60215 kubeadm.go:310] 	--control-plane 
	I1216 21:05:21.416033   60215 kubeadm.go:310] 
	I1216 21:05:21.416152   60215 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:05:21.416165   60215 kubeadm.go:310] 
	I1216 21:05:21.416295   60215 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wev02f.lvhctqt9pq1agi1c \
	I1216 21:05:21.416452   60215 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:05:21.417157   60215 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:05:21.417251   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:05:21.417265   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:05:21.418899   60215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:05:21.420240   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:05:21.438639   60215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:05:21.470443   60215 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:05:21.470525   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:21.470552   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-606219 minikube.k8s.io/updated_at=2024_12_16T21_05_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=embed-certs-606219 minikube.k8s.io/primary=true
	I1216 21:05:21.721162   60215 ops.go:34] apiserver oom_adj: -16
	I1216 21:05:21.721292   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:22.221634   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:22.722431   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:23.221436   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:23.721948   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.222009   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.722203   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.835684   60215 kubeadm.go:1113] duration metric: took 3.36522517s to wait for elevateKubeSystemPrivileges
	I1216 21:05:24.835729   60215 kubeadm.go:394] duration metric: took 5m0.316036708s to StartCluster
	I1216 21:05:24.835751   60215 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:05:24.835847   60215 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:05:24.838279   60215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:05:24.838580   60215 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:05:24.838625   60215 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:05:24.838747   60215 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-606219"
	I1216 21:05:24.838768   60215 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-606219"
	W1216 21:05:24.838789   60215 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:05:24.838816   60215 config.go:182] Loaded profile config "embed-certs-606219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:05:24.838825   60215 addons.go:69] Setting default-storageclass=true in profile "embed-certs-606219"
	I1216 21:05:24.838832   60215 addons.go:69] Setting metrics-server=true in profile "embed-certs-606219"
	I1216 21:05:24.838846   60215 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-606219"
	I1216 21:05:24.838822   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.838848   60215 addons.go:234] Setting addon metrics-server=true in "embed-certs-606219"
	W1216 21:05:24.838945   60215 addons.go:243] addon metrics-server should already be in state true
	I1216 21:05:24.838965   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.839285   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839292   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839331   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.839364   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839415   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.839496   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.843833   60215 out.go:177] * Verifying Kubernetes components...
	I1216 21:05:24.845341   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:05:24.857648   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I1216 21:05:24.858457   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.859021   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.859037   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.861356   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36663
	I1216 21:05:24.861406   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I1216 21:05:24.861357   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.861844   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.862150   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.862188   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.862315   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.862334   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.862334   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.862661   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.862876   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.862894   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.863171   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.863200   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.863634   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.863964   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.867371   60215 addons.go:234] Setting addon default-storageclass=true in "embed-certs-606219"
	W1216 21:05:24.867392   60215 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:05:24.867419   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.867758   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.867801   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.884243   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35999
	I1216 21:05:24.884680   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.885282   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.885304   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.885380   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36799
	I1216 21:05:24.885657   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.885730   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.885934   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.886191   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.886202   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.886473   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.886831   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.886853   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.887935   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.890092   60215 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:05:24.891395   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:05:24.891413   60215 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:05:24.891441   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.894367   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I1216 21:05:24.894926   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.895551   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.895570   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.895832   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.896148   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.896382   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.896501   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.896523   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.897136   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.897327   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.897507   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.897673   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:24.898101   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.900061   60215 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:05:24.901390   60215 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:05:24.901412   60215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:05:24.901432   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.904063   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.904403   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.904421   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.904617   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.904828   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.904969   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.905117   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:24.907518   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I1216 21:05:24.907890   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.908349   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.908362   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.908615   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.908793   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.910349   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.910557   60215 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:05:24.910590   60215 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:05:24.910623   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.913163   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.913546   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.913628   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.913971   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.914156   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.914402   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.914562   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:25.054773   60215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:05:25.077692   60215 node_ready.go:35] waiting up to 6m0s for node "embed-certs-606219" to be "Ready" ...
	I1216 21:05:25.085592   60215 node_ready.go:49] node "embed-certs-606219" has status "Ready":"True"
	I1216 21:05:25.085618   60215 node_ready.go:38] duration metric: took 7.893359ms for node "embed-certs-606219" to be "Ready" ...
	I1216 21:05:25.085630   60215 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:05:25.092073   60215 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:25.160890   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:05:25.171950   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:05:25.174517   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:05:25.174540   60215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:05:25.201386   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:05:25.201415   60215 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:05:25.279568   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:25.279599   60215 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:05:25.316528   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:25.944495   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944521   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944529   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944533   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944816   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.944835   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.944845   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944855   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944855   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.944869   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.944876   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944888   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944817   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945069   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945131   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.945147   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.945168   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.945173   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945218   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.961427   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.961449   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.961729   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.961743   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.745600   60215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.429029698s)
	I1216 21:05:26.745665   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:26.745678   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:26.746097   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:26.746115   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:26.746128   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.746142   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:26.746151   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:26.746429   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:26.746446   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.746457   60215 addons.go:475] Verifying addon metrics-server=true in "embed-certs-606219"
	I1216 21:05:26.746480   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:26.748859   60215 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1216 21:05:26.785021   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:05:26.785309   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:05:26.750502   60215 addons.go:510] duration metric: took 1.911885721s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1216 21:05:27.124629   60215 pod_ready.go:103] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:28.100607   60215 pod_ready.go:93] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:28.100642   60215 pod_ready.go:82] duration metric: took 3.008540123s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.100654   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.107620   60215 pod_ready.go:93] pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:28.107649   60215 pod_ready.go:82] duration metric: took 6.986126ms for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.107661   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:30.114012   60215 pod_ready.go:103] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:31.116704   60215 pod_ready.go:93] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:31.116738   60215 pod_ready.go:82] duration metric: took 3.009069732s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.116752   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.122043   60215 pod_ready.go:93] pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:31.122079   60215 pod_ready.go:82] duration metric: took 5.318248ms for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.122089   60215 pod_ready.go:39] duration metric: took 6.036446164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:05:31.122107   60215 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:05:31.122167   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:05:31.140854   60215 api_server.go:72] duration metric: took 6.302233923s to wait for apiserver process to appear ...
	I1216 21:05:31.140887   60215 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:05:31.140910   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:05:31.146080   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 200:
	ok
	I1216 21:05:31.147076   60215 api_server.go:141] control plane version: v1.32.0
	I1216 21:05:31.147107   60215 api_server.go:131] duration metric: took 6.2056ms to wait for apiserver health ...
	I1216 21:05:31.147115   60215 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:05:31.152598   60215 system_pods.go:59] 9 kube-system pods found
	I1216 21:05:31.152627   60215 system_pods.go:61] "coredns-668d6bf9bc-5c74p" [ef8e73b6-150f-47cc-9df9-dcf983e5bd6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.152634   60215 system_pods.go:61] "coredns-668d6bf9bc-xhdlz" [c1b5b585-f005-4885-9809-60f60e03bf04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.152640   60215 system_pods.go:61] "etcd-embed-certs-606219" [f5595ee4-23f3-4227-8e25-8679fd2dc722] Running
	I1216 21:05:31.152643   60215 system_pods.go:61] "kube-apiserver-embed-certs-606219" [be11ba17-ecee-47c1-a4bd-329e0e705369] Running
	I1216 21:05:31.152647   60215 system_pods.go:61] "kube-controller-manager-embed-certs-606219" [21210597-d4d5-4cab-9a24-2d9f702f682d] Running
	I1216 21:05:31.152652   60215 system_pods.go:61] "kube-proxy-677x9" [37810520-4f02-46c4-8eeb-6dc70c859e3e] Running
	I1216 21:05:31.152655   60215 system_pods.go:61] "kube-scheduler-embed-certs-606219" [5a39f42d-b727-4acd-bd39-ae1c56a5b725] Running
	I1216 21:05:31.152659   60215 system_pods.go:61] "metrics-server-f79f97bbb-6fxnl" [828f2925-402c-4f49-89e1-354e082c0de4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:31.152662   60215 system_pods.go:61] "storage-provisioner" [6437bd61-690b-498d-b35c-e2ef4eb5be97] Running
	I1216 21:05:31.152669   60215 system_pods.go:74] duration metric: took 5.548798ms to wait for pod list to return data ...
	I1216 21:05:31.152675   60215 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:05:31.155444   60215 default_sa.go:45] found service account: "default"
	I1216 21:05:31.155469   60215 default_sa.go:55] duration metric: took 2.788897ms for default service account to be created ...
	I1216 21:05:31.155477   60215 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:05:31.160520   60215 system_pods.go:86] 9 kube-system pods found
	I1216 21:05:31.160548   60215 system_pods.go:89] "coredns-668d6bf9bc-5c74p" [ef8e73b6-150f-47cc-9df9-dcf983e5bd6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.160555   60215 system_pods.go:89] "coredns-668d6bf9bc-xhdlz" [c1b5b585-f005-4885-9809-60f60e03bf04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.160561   60215 system_pods.go:89] "etcd-embed-certs-606219" [f5595ee4-23f3-4227-8e25-8679fd2dc722] Running
	I1216 21:05:31.160565   60215 system_pods.go:89] "kube-apiserver-embed-certs-606219" [be11ba17-ecee-47c1-a4bd-329e0e705369] Running
	I1216 21:05:31.160569   60215 system_pods.go:89] "kube-controller-manager-embed-certs-606219" [21210597-d4d5-4cab-9a24-2d9f702f682d] Running
	I1216 21:05:31.160573   60215 system_pods.go:89] "kube-proxy-677x9" [37810520-4f02-46c4-8eeb-6dc70c859e3e] Running
	I1216 21:05:31.160576   60215 system_pods.go:89] "kube-scheduler-embed-certs-606219" [5a39f42d-b727-4acd-bd39-ae1c56a5b725] Running
	I1216 21:05:31.160580   60215 system_pods.go:89] "metrics-server-f79f97bbb-6fxnl" [828f2925-402c-4f49-89e1-354e082c0de4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:31.160584   60215 system_pods.go:89] "storage-provisioner" [6437bd61-690b-498d-b35c-e2ef4eb5be97] Running
	I1216 21:05:31.160591   60215 system_pods.go:126] duration metric: took 5.109359ms to wait for k8s-apps to be running ...
	I1216 21:05:31.160597   60215 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:05:31.160637   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:31.177182   60215 system_svc.go:56] duration metric: took 16.575484ms WaitForService to wait for kubelet
	I1216 21:05:31.177216   60215 kubeadm.go:582] duration metric: took 6.33860089s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:05:31.177239   60215 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:05:31.180614   60215 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:05:31.180635   60215 node_conditions.go:123] node cpu capacity is 2
	I1216 21:05:31.180645   60215 node_conditions.go:105] duration metric: took 3.400617ms to run NodePressure ...
	I1216 21:05:31.180656   60215 start.go:241] waiting for startup goroutines ...
	I1216 21:05:31.180667   60215 start.go:246] waiting for cluster config update ...
	I1216 21:05:31.180684   60215 start.go:255] writing updated cluster config ...
	I1216 21:05:31.180960   60215 ssh_runner.go:195] Run: rm -f paused
	I1216 21:05:31.232404   60215 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:05:31.234366   60215 out.go:177] * Done! kubectl is now configured to use "embed-certs-606219" cluster and "default" namespace by default
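	A quick way to confirm the reported state (a sketch, assuming the kubectl context written above): 'kubectl --context embed-certs-606219 get pods -n kube-system' should list the nine kube-system pods enumerated in the readiness check, with metrics-server still Pending.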
	I1216 21:06:06.787417   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:06.787673   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:06:06.787700   60933 kubeadm.go:310] 
	I1216 21:06:06.787779   60933 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 21:06:06.787849   60933 kubeadm.go:310] 		timed out waiting for the condition
	I1216 21:06:06.787864   60933 kubeadm.go:310] 
	I1216 21:06:06.787894   60933 kubeadm.go:310] 	This error is likely caused by:
	I1216 21:06:06.787944   60933 kubeadm.go:310] 		- The kubelet is not running
	I1216 21:06:06.788115   60933 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 21:06:06.788131   60933 kubeadm.go:310] 
	I1216 21:06:06.788238   60933 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 21:06:06.788270   60933 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 21:06:06.788328   60933 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 21:06:06.788346   60933 kubeadm.go:310] 
	I1216 21:06:06.788492   60933 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 21:06:06.788568   60933 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 21:06:06.788575   60933 kubeadm.go:310] 
	I1216 21:06:06.788706   60933 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 21:06:06.788914   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 21:06:06.789052   60933 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 21:06:06.789150   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 21:06:06.789160   60933 kubeadm.go:310] 
	I1216 21:06:06.789970   60933 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:06:06.790084   60933 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 21:06:06.790222   60933 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1216 21:06:06.790376   60933 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
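	For this CRI-O setup the inspection commands suggested above amount to (a sketch; CONTAINERID is a placeholder for whichever container 'ps -a' reports as failing):
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID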
	
	I1216 21:06:06.790430   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:06:07.272336   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:06:07.288881   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:06:07.303411   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:06:07.303437   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:06:07.303486   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:06:07.314605   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:06:07.314675   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:06:07.326523   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:06:07.336506   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:06:07.336587   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:06:07.347505   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:06:07.357743   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:06:07.357799   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:06:07.368251   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:06:07.378296   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:06:07.378366   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:06:07.390625   60933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:06:07.461800   60933 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 21:06:07.461911   60933 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:06:07.607467   60933 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:06:07.607664   60933 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:06:07.607821   60933 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 21:06:07.821429   60933 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:06:07.823617   60933 out.go:235]   - Generating certificates and keys ...
	I1216 21:06:07.823728   60933 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:06:07.823826   60933 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:06:07.823970   60933 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:06:07.824066   60933 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:06:07.824191   60933 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:06:07.824281   60933 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:06:07.824374   60933 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:06:07.824452   60933 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:06:07.824529   60933 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:06:07.824634   60933 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:06:07.824728   60933 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:06:07.824826   60933 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:06:08.070481   60933 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:06:08.416182   60933 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:06:08.472848   60933 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:06:08.528700   60933 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:06:08.551528   60933 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:06:08.552215   60933 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:06:08.552299   60933 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:06:08.702187   60933 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:06:08.704170   60933 out.go:235]   - Booting up control plane ...
	I1216 21:06:08.704286   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:06:08.721205   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:06:08.722619   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:06:08.724289   60933 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:06:08.726457   60933 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 21:06:48.729045   60933 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 21:06:48.729713   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:48.730028   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:06:53.730648   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:53.730870   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:07:03.731670   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:07:03.731904   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:07:23.733276   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:07:23.733489   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:08:03.734439   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:08:03.734730   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:08:03.734768   60933 kubeadm.go:310] 
	I1216 21:08:03.734831   60933 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 21:08:03.734902   60933 kubeadm.go:310] 		timed out waiting for the condition
	I1216 21:08:03.734917   60933 kubeadm.go:310] 
	I1216 21:08:03.734966   60933 kubeadm.go:310] 	This error is likely caused by:
	I1216 21:08:03.735003   60933 kubeadm.go:310] 		- The kubelet is not running
	I1216 21:08:03.735094   60933 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 21:08:03.735104   60933 kubeadm.go:310] 
	I1216 21:08:03.735260   60933 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 21:08:03.735325   60933 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 21:08:03.735353   60933 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 21:08:03.735359   60933 kubeadm.go:310] 
	I1216 21:08:03.735486   60933 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 21:08:03.735604   60933 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 21:08:03.735614   60933 kubeadm.go:310] 
	I1216 21:08:03.735757   60933 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 21:08:03.735880   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 21:08:03.735986   60933 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 21:08:03.736096   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 21:08:03.736107   60933 kubeadm.go:310] 
	I1216 21:08:03.736944   60933 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:08:03.737145   60933 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 21:08:03.737211   60933 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 21:08:03.737287   60933 kubeadm.go:394] duration metric: took 7m57.891196073s to StartCluster
	I1216 21:08:03.737346   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:08:03.737417   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:08:03.789377   60933 cri.go:89] found id: ""
	I1216 21:08:03.789412   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.789421   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:08:03.789426   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:08:03.789477   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:08:03.831122   60933 cri.go:89] found id: ""
	I1216 21:08:03.831150   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.831161   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:08:03.831167   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:08:03.831236   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:08:03.870598   60933 cri.go:89] found id: ""
	I1216 21:08:03.870625   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.870634   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:08:03.870640   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:08:03.870695   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:08:03.909060   60933 cri.go:89] found id: ""
	I1216 21:08:03.909095   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.909103   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:08:03.909109   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:08:03.909163   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:08:03.946925   60933 cri.go:89] found id: ""
	I1216 21:08:03.946954   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.946962   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:08:03.946968   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:08:03.947038   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:08:03.985596   60933 cri.go:89] found id: ""
	I1216 21:08:03.985629   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.985650   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:08:03.985670   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:08:03.985736   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:08:04.022504   60933 cri.go:89] found id: ""
	I1216 21:08:04.022530   60933 logs.go:282] 0 containers: []
	W1216 21:08:04.022538   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:08:04.022545   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:08:04.022608   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:08:04.075636   60933 cri.go:89] found id: ""
	I1216 21:08:04.075667   60933 logs.go:282] 0 containers: []
	W1216 21:08:04.075677   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:08:04.075688   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:08:04.075707   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:08:04.180622   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:08:04.180653   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:08:04.180671   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:08:04.308091   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:08:04.308146   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:08:04.353240   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:08:04.353294   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:08:04.407919   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:08:04.407955   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 21:08:04.423583   60933 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1216 21:08:04.423644   60933 out.go:270] * 
	W1216 21:08:04.423727   60933 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 21:08:04.423749   60933 out.go:270] * 
	W1216 21:08:04.424576   60933 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 21:08:04.428361   60933 out.go:201] 
	W1216 21:08:04.429839   60933 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 21:08:04.429919   60933 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 21:08:04.429958   60933 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 21:08:04.431619   60933 out.go:201] 
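	A minimal sketch of applying the suggestion above (profile name is a placeholder, not taken from this run): 'minikube start -p <profile> --kubernetes-version=v1.20.0 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd'. If the kubelet still refuses connections on :10248, 'journalctl -xeu kubelet' on the node remains the primary diagnostic.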
	
	
	==> CRI-O <==
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.568955430Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383638568929661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d3ac6cc-e3ce-4631-a5a1-5af462e5d6dd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.569643845Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7eac15b-d064-43f9-93c4-63377bebcd74 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.569712338Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7eac15b-d064-43f9-93c4-63377bebcd74 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.569926247Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d0826a03ee32bda77ca97335013ae91f002f774efa4f77d0b0a3c75ab0f2fae,PodSandboxId:f96f4d5fc11834abc33de5566a9cef8bbd6a6e647645ce19a6dbf662504eb3f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383088143458605,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e5b12f0-3d96-4dd0-81e7-300b82058d47,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b883d30dedb7ac8cf7ed4d5fbe42cf9c380af2fa2adc7837f5e1eb4f4286d56,PodSandboxId:87210bde75f5e5f2f8c0c6774deec425ad3f89c9206f0585eda832131db80ccc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383087775555413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2qcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac98efa-96ff-4564-93de-4a61de7a6507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbea0505ae515737f507676651e8308dd31354d4e2983604c7600ec4b698315,PodSandboxId:95cded2582e335ced9145c41ce9e157aef56a4e828f6c394a7ec52824df90347,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383087736159835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-fb7wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f2f2c0e7-893f-45ba-8da9-3b03f5560d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8024d16c768a1a12c74b2f6ef94acf5f68515049b2d934b644d99c6b2b9402ba,PodSandboxId:34cfaeb4337fc75cbee3a6fa49712b4ca1d8e595e989ef858c0cf60b220aae69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING
,CreatedAt:1734383087535323304,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njqp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5f1789d-b343-4c2e-b078-4a15f4b18569,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c389817bb05dee5083f0f85846c3e9cccf18b201795c52310482918e60e25df,PodSandboxId:1eb78ce645b2571eb5ade481309d52b02dbb60e7711b946f1e5c6e3986d92840,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383076271345315,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4161c8d3e913d5fb25c2915868fcc95f,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64cd514bcb576b70d0cc71be17b490af4580719763a31c11db97a3606c6a43f7,PodSandboxId:996c6adcce57d835833611f5660c05364fd46dde8d3597fd09fd1f56248554d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734383076151220888,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 540eb587b53eee7d2fdff2e59d720161,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6ba32c1db82f60d989fa33475fbc9acb149b3f07c73c7a5ef49e78ea656bd5f,PodSandboxId:3a803b97c066c51c4b72c4173806d11746ac47dc79cfdd01bc7aeab04d6d1db8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383076196769230,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638856e81522f218edf3c9f433e2fb12,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e575119c7ed0e6e978688998860bb47314a279b21cba2a1376c00f7be1f8d93,PodSandboxId:eff6ffd5c55a090d444b9d767f2456b349970302a3b48a211d3552511c1e2835,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383076132527737,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041e70638e8ff674d941b5a1fa24cadc,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404f75e4f0e84be27c459ae7b16952b6e5ca8cf8aacc77237c4bb2a68a91a662,PodSandboxId:9214074bc484ad26259b24c2cdb17ba7618684e2d71e0e316304ad2dce41a57f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382785972660069,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638856e81522f218edf3c9f433e2fb12,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7eac15b-d064-43f9-93c4-63377bebcd74 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.611428463Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=adc3bc5e-5dd4-434c-832d-3d6c8fa07559 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.611523387Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=adc3bc5e-5dd4-434c-832d-3d6c8fa07559 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.612518165Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=759b13e4-1bb5-4206-87c9-14dd35d6f530 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.613031725Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383638613008715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=759b13e4-1bb5-4206-87c9-14dd35d6f530 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.613525561Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33f6a0a2-0146-4a72-8e4b-8f1aeb3d9565 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.613627828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33f6a0a2-0146-4a72-8e4b-8f1aeb3d9565 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.613834081Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d0826a03ee32bda77ca97335013ae91f002f774efa4f77d0b0a3c75ab0f2fae,PodSandboxId:f96f4d5fc11834abc33de5566a9cef8bbd6a6e647645ce19a6dbf662504eb3f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383088143458605,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e5b12f0-3d96-4dd0-81e7-300b82058d47,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b883d30dedb7ac8cf7ed4d5fbe42cf9c380af2fa2adc7837f5e1eb4f4286d56,PodSandboxId:87210bde75f5e5f2f8c0c6774deec425ad3f89c9206f0585eda832131db80ccc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383087775555413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2qcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac98efa-96ff-4564-93de-4a61de7a6507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbea0505ae515737f507676651e8308dd31354d4e2983604c7600ec4b698315,PodSandboxId:95cded2582e335ced9145c41ce9e157aef56a4e828f6c394a7ec52824df90347,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383087736159835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-fb7wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f2f2c0e7-893f-45ba-8da9-3b03f5560d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8024d16c768a1a12c74b2f6ef94acf5f68515049b2d934b644d99c6b2b9402ba,PodSandboxId:34cfaeb4337fc75cbee3a6fa49712b4ca1d8e595e989ef858c0cf60b220aae69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING
,CreatedAt:1734383087535323304,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njqp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5f1789d-b343-4c2e-b078-4a15f4b18569,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c389817bb05dee5083f0f85846c3e9cccf18b201795c52310482918e60e25df,PodSandboxId:1eb78ce645b2571eb5ade481309d52b02dbb60e7711b946f1e5c6e3986d92840,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383076271345315,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4161c8d3e913d5fb25c2915868fcc95f,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64cd514bcb576b70d0cc71be17b490af4580719763a31c11db97a3606c6a43f7,PodSandboxId:996c6adcce57d835833611f5660c05364fd46dde8d3597fd09fd1f56248554d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734383076151220888,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 540eb587b53eee7d2fdff2e59d720161,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6ba32c1db82f60d989fa33475fbc9acb149b3f07c73c7a5ef49e78ea656bd5f,PodSandboxId:3a803b97c066c51c4b72c4173806d11746ac47dc79cfdd01bc7aeab04d6d1db8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383076196769230,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638856e81522f218edf3c9f433e2fb12,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e575119c7ed0e6e978688998860bb47314a279b21cba2a1376c00f7be1f8d93,PodSandboxId:eff6ffd5c55a090d444b9d767f2456b349970302a3b48a211d3552511c1e2835,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383076132527737,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041e70638e8ff674d941b5a1fa24cadc,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404f75e4f0e84be27c459ae7b16952b6e5ca8cf8aacc77237c4bb2a68a91a662,PodSandboxId:9214074bc484ad26259b24c2cdb17ba7618684e2d71e0e316304ad2dce41a57f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382785972660069,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638856e81522f218edf3c9f433e2fb12,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33f6a0a2-0146-4a72-8e4b-8f1aeb3d9565 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.654228503Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2128b765-d51d-48fa-b336-3b9236806ef5 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.654299610Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2128b765-d51d-48fa-b336-3b9236806ef5 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.655366759Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98f32108-f30e-4053-bbc0-1252bc4ec021 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.655904115Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383638655879189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98f32108-f30e-4053-bbc0-1252bc4ec021 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.656405104Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3218fa41-24c0-4f7f-861c-0c54bae5ad3c name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.656480873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3218fa41-24c0-4f7f-861c-0c54bae5ad3c name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.656784159Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d0826a03ee32bda77ca97335013ae91f002f774efa4f77d0b0a3c75ab0f2fae,PodSandboxId:f96f4d5fc11834abc33de5566a9cef8bbd6a6e647645ce19a6dbf662504eb3f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383088143458605,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e5b12f0-3d96-4dd0-81e7-300b82058d47,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b883d30dedb7ac8cf7ed4d5fbe42cf9c380af2fa2adc7837f5e1eb4f4286d56,PodSandboxId:87210bde75f5e5f2f8c0c6774deec425ad3f89c9206f0585eda832131db80ccc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383087775555413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2qcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac98efa-96ff-4564-93de-4a61de7a6507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbea0505ae515737f507676651e8308dd31354d4e2983604c7600ec4b698315,PodSandboxId:95cded2582e335ced9145c41ce9e157aef56a4e828f6c394a7ec52824df90347,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383087736159835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-fb7wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f2f2c0e7-893f-45ba-8da9-3b03f5560d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8024d16c768a1a12c74b2f6ef94acf5f68515049b2d934b644d99c6b2b9402ba,PodSandboxId:34cfaeb4337fc75cbee3a6fa49712b4ca1d8e595e989ef858c0cf60b220aae69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING
,CreatedAt:1734383087535323304,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njqp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5f1789d-b343-4c2e-b078-4a15f4b18569,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c389817bb05dee5083f0f85846c3e9cccf18b201795c52310482918e60e25df,PodSandboxId:1eb78ce645b2571eb5ade481309d52b02dbb60e7711b946f1e5c6e3986d92840,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383076271345315,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4161c8d3e913d5fb25c2915868fcc95f,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64cd514bcb576b70d0cc71be17b490af4580719763a31c11db97a3606c6a43f7,PodSandboxId:996c6adcce57d835833611f5660c05364fd46dde8d3597fd09fd1f56248554d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734383076151220888,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 540eb587b53eee7d2fdff2e59d720161,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6ba32c1db82f60d989fa33475fbc9acb149b3f07c73c7a5ef49e78ea656bd5f,PodSandboxId:3a803b97c066c51c4b72c4173806d11746ac47dc79cfdd01bc7aeab04d6d1db8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383076196769230,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638856e81522f218edf3c9f433e2fb12,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e575119c7ed0e6e978688998860bb47314a279b21cba2a1376c00f7be1f8d93,PodSandboxId:eff6ffd5c55a090d444b9d767f2456b349970302a3b48a211d3552511c1e2835,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383076132527737,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041e70638e8ff674d941b5a1fa24cadc,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404f75e4f0e84be27c459ae7b16952b6e5ca8cf8aacc77237c4bb2a68a91a662,PodSandboxId:9214074bc484ad26259b24c2cdb17ba7618684e2d71e0e316304ad2dce41a57f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382785972660069,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638856e81522f218edf3c9f433e2fb12,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3218fa41-24c0-4f7f-861c-0c54bae5ad3c name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.694846953Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5147e33e-0323-4e4f-b616-ce616cccf547 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.694958572Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5147e33e-0323-4e4f-b616-ce616cccf547 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.695956770Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7297f9b8-f8cd-4e2b-b950-08ec2e048706 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.696335656Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383638696316102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7297f9b8-f8cd-4e2b-b950-08ec2e048706 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.697048757Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1dd1af48-ac25-4d09-8f93-4c2e8e401f7d name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.697118733Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1dd1af48-ac25-4d09-8f93-4c2e8e401f7d name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:13:58 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:13:58.697321058Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d0826a03ee32bda77ca97335013ae91f002f774efa4f77d0b0a3c75ab0f2fae,PodSandboxId:f96f4d5fc11834abc33de5566a9cef8bbd6a6e647645ce19a6dbf662504eb3f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383088143458605,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e5b12f0-3d96-4dd0-81e7-300b82058d47,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b883d30dedb7ac8cf7ed4d5fbe42cf9c380af2fa2adc7837f5e1eb4f4286d56,PodSandboxId:87210bde75f5e5f2f8c0c6774deec425ad3f89c9206f0585eda832131db80ccc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383087775555413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2qcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac98efa-96ff-4564-93de-4a61de7a6507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbea0505ae515737f507676651e8308dd31354d4e2983604c7600ec4b698315,PodSandboxId:95cded2582e335ced9145c41ce9e157aef56a4e828f6c394a7ec52824df90347,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383087736159835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-fb7wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f2f2c0e7-893f-45ba-8da9-3b03f5560d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8024d16c768a1a12c74b2f6ef94acf5f68515049b2d934b644d99c6b2b9402ba,PodSandboxId:34cfaeb4337fc75cbee3a6fa49712b4ca1d8e595e989ef858c0cf60b220aae69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING
,CreatedAt:1734383087535323304,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njqp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5f1789d-b343-4c2e-b078-4a15f4b18569,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c389817bb05dee5083f0f85846c3e9cccf18b201795c52310482918e60e25df,PodSandboxId:1eb78ce645b2571eb5ade481309d52b02dbb60e7711b946f1e5c6e3986d92840,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383076271345315,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4161c8d3e913d5fb25c2915868fcc95f,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64cd514bcb576b70d0cc71be17b490af4580719763a31c11db97a3606c6a43f7,PodSandboxId:996c6adcce57d835833611f5660c05364fd46dde8d3597fd09fd1f56248554d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734383076151220888,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 540eb587b53eee7d2fdff2e59d720161,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6ba32c1db82f60d989fa33475fbc9acb149b3f07c73c7a5ef49e78ea656bd5f,PodSandboxId:3a803b97c066c51c4b72c4173806d11746ac47dc79cfdd01bc7aeab04d6d1db8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383076196769230,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638856e81522f218edf3c9f433e2fb12,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e575119c7ed0e6e978688998860bb47314a279b21cba2a1376c00f7be1f8d93,PodSandboxId:eff6ffd5c55a090d444b9d767f2456b349970302a3b48a211d3552511c1e2835,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383076132527737,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041e70638e8ff674d941b5a1fa24cadc,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404f75e4f0e84be27c459ae7b16952b6e5ca8cf8aacc77237c4bb2a68a91a662,PodSandboxId:9214074bc484ad26259b24c2cdb17ba7618684e2d71e0e316304ad2dce41a57f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382785972660069,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638856e81522f218edf3c9f433e2fb12,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1dd1af48-ac25-4d09-8f93-4c2e8e401f7d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4d0826a03ee32       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   f96f4d5fc1183       storage-provisioner
	8b883d30dedb7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   87210bde75f5e       coredns-668d6bf9bc-2qcfx
	1cbea0505ae51       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   95cded2582e33       coredns-668d6bf9bc-fb7wx
	8024d16c768a1       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   9 minutes ago       Running             kube-proxy                0                   34cfaeb4337fc       kube-proxy-njqp8
	7c389817bb05d       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   9 minutes ago       Running             etcd                      2                   1eb78ce645b25       etcd-default-k8s-diff-port-327790
	f6ba32c1db82f       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   9 minutes ago       Running             kube-apiserver            2                   3a803b97c066c       kube-apiserver-default-k8s-diff-port-327790
	64cd514bcb576       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   9 minutes ago       Running             kube-scheduler            2                   996c6adcce57d       kube-scheduler-default-k8s-diff-port-327790
	0e575119c7ed0       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   9 minutes ago       Running             kube-controller-manager   2                   eff6ffd5c55a0       kube-controller-manager-default-k8s-diff-port-327790
	404f75e4f0e84       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   14 minutes ago      Exited              kube-apiserver            1                   9214074bc484a       kube-apiserver-default-k8s-diff-port-327790
	
	
	==> coredns [1cbea0505ae515737f507676651e8308dd31354d4e2983604c7600ec4b698315] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [8b883d30dedb7ac8cf7ed4d5fbe42cf9c380af2fa2adc7837f5e1eb4f4286d56] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-327790
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-327790
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477
	                    minikube.k8s.io/name=default-k8s-diff-port-327790
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T21_04_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 21:04:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-327790
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 21:13:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 21:12:51 +0000   Mon, 16 Dec 2024 21:04:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 21:12:51 +0000   Mon, 16 Dec 2024 21:04:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 21:12:51 +0000   Mon, 16 Dec 2024 21:04:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 21:12:51 +0000   Mon, 16 Dec 2024 21:04:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    default-k8s-diff-port-327790
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c304e91f28e48498b23e62d0abccc28
	  System UUID:                5c304e91-f28e-4849-8b23-e62d0abccc28
	  Boot ID:                    d8fa6d28-1be2-4bf3-9cb6-2881b7c2f2fd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-2qcfx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-668d6bf9bc-fb7wx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-default-k8s-diff-port-327790                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-327790             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-327790    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-njqp8                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-default-k8s-diff-port-327790             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-f79f97bbb-84xtf                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m10s  kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node default-k8s-diff-port-327790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node default-k8s-diff-port-327790 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node default-k8s-diff-port-327790 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m13s  node-controller  Node default-k8s-diff-port-327790 event: Registered Node default-k8s-diff-port-327790 in Controller
	
	
	==> dmesg <==
	[  +0.053113] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041934] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.050363] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.945888] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.637075] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.063997] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.058153] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059905] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.209206] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.170050] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.387564] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +4.689490] systemd-fstab-generator[808]: Ignoring "noauto" option for root device
	[  +0.061730] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.754898] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +5.611970] kauditd_printk_skb: 97 callbacks suppressed
	[  +9.327600] kauditd_printk_skb: 90 callbacks suppressed
	[Dec16 21:04] kauditd_printk_skb: 4 callbacks suppressed
	[ +12.902544] systemd-fstab-generator[2716]: Ignoring "noauto" option for root device
	[  +4.661172] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.395475] systemd-fstab-generator[3049]: Ignoring "noauto" option for root device
	[  +4.894467] systemd-fstab-generator[3161]: Ignoring "noauto" option for root device
	[  +0.092903] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.323443] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [7c389817bb05dee5083f0f85846c3e9cccf18b201795c52310482918e60e25df] <==
	{"level":"info","ts":"2024-12-16T21:04:36.787998Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-16T21:04:36.788320Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"95e2e907d4f1ad16","initial-advertise-peer-urls":["https://192.168.39.162:2380"],"listen-peer-urls":["https://192.168.39.162:2380"],"advertise-client-urls":["https://192.168.39.162:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.162:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-16T21:04:36.788374Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-16T21:04:36.788539Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.162:2380"}
	{"level":"info","ts":"2024-12-16T21:04:36.788561Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.162:2380"}
	{"level":"info","ts":"2024-12-16T21:04:37.053662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-16T21:04:37.053768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-16T21:04:37.053812Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 received MsgPreVoteResp from 95e2e907d4f1ad16 at term 1"}
	{"level":"info","ts":"2024-12-16T21:04:37.053852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 became candidate at term 2"}
	{"level":"info","ts":"2024-12-16T21:04:37.053878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 received MsgVoteResp from 95e2e907d4f1ad16 at term 2"}
	{"level":"info","ts":"2024-12-16T21:04:37.053910Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 became leader at term 2"}
	{"level":"info","ts":"2024-12-16T21:04:37.053929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 95e2e907d4f1ad16 elected leader 95e2e907d4f1ad16 at term 2"}
	{"level":"info","ts":"2024-12-16T21:04:37.057893Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"95e2e907d4f1ad16","local-member-attributes":"{Name:default-k8s-diff-port-327790 ClientURLs:[https://192.168.39.162:2379]}","request-path":"/0/members/95e2e907d4f1ad16/attributes","cluster-id":"da8895e0fc3a6493","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-16T21:04:37.058125Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T21:04:37.058639Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T21:04:37.062627Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T21:04:37.062660Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-16T21:04:37.065235Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-16T21:04:37.062660Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T21:04:37.065935Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-16T21:04:37.069665Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da8895e0fc3a6493","local-member-id":"95e2e907d4f1ad16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T21:04:37.072768Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T21:04:37.074648Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T21:04:37.070031Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T21:04:37.075319Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.162:2379"}
	
	
	==> kernel <==
	 21:13:59 up 14 min,  0 users,  load average: 0.20, 0.25, 0.15
	Linux default-k8s-diff-port-327790 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [404f75e4f0e84be27c459ae7b16952b6e5ca8cf8aacc77237c4bb2a68a91a662] <==
	W1216 21:04:31.963667       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.031736       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.061777       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.083171       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.092861       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.178706       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.186308       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.244924       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.253523       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.264310       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.303837       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.317695       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.356961       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.426977       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.433397       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.505645       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.506922       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.548087       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.548096       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.669498       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.675268       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.749855       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.892492       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.990922       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:33.037913       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f6ba32c1db82f60d989fa33475fbc9acb149b3f07c73c7a5ef49e78ea656bd5f] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1216 21:09:39.664056       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:09:39.664183       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1216 21:09:39.665278       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 21:09:39.665345       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1216 21:10:39.666243       1 handler_proxy.go:99] no RequestInfo found in the context
	W1216 21:10:39.666243       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:10:39.666568       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1216 21:10:39.666695       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 21:10:39.667918       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 21:10:39.667988       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1216 21:12:39.668759       1 handler_proxy.go:99] no RequestInfo found in the context
	W1216 21:12:39.668754       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:12:39.669235       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1216 21:12:39.669361       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 21:12:39.670421       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 21:12:39.670541       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0e575119c7ed0e6e978688998860bb47314a279b21cba2a1376c00f7be1f8d93] <==
	E1216 21:08:45.315695       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:08:45.344232       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:09:15.323134       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:09:15.353943       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:09:45.330259       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:09:45.363912       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:10:15.336437       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:10:15.372126       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:10:45.343323       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:10:45.381482       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 21:10:48.287122       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="327.463µs"
	I1216 21:10:59.286354       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="144.414µs"
	E1216 21:11:15.351197       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:11:15.393395       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:11:45.359189       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:11:45.406922       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:12:15.365221       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:12:15.414498       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:12:45.373829       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:12:45.426050       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 21:12:51.729291       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-327790"
	E1216 21:13:15.383004       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:13:15.438304       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:13:45.390401       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:13:45.449222       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [8024d16c768a1a12c74b2f6ef94acf5f68515049b2d934b644d99c6b2b9402ba] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1216 21:04:48.267337       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1216 21:04:48.295520       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.162"]
	E1216 21:04:48.295763       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 21:04:48.367248       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1216 21:04:48.367341       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 21:04:48.367379       1 server_linux.go:170] "Using iptables Proxier"
	I1216 21:04:48.370174       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 21:04:48.370555       1 server.go:497] "Version info" version="v1.32.0"
	I1216 21:04:48.370811       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 21:04:48.372421       1 config.go:199] "Starting service config controller"
	I1216 21:04:48.372867       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 21:04:48.373097       1 config.go:105] "Starting endpoint slice config controller"
	I1216 21:04:48.373132       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 21:04:48.373773       1 config.go:329] "Starting node config controller"
	I1216 21:04:48.373810       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 21:04:48.473727       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 21:04:48.474625       1 shared_informer.go:320] Caches are synced for service config
	I1216 21:04:48.474645       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [64cd514bcb576b70d0cc71be17b490af4580719763a31c11db97a3606c6a43f7] <==
	W1216 21:04:38.724310       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:38.724340       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:39.646332       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1216 21:04:39.646385       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1216 21:04:39.692848       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:39.692959       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:39.738198       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:39.738334       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:39.840361       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 21:04:39.840430       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:39.849893       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:39.849948       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:39.983762       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1216 21:04:39.983888       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:40.025254       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 21:04:40.025290       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:40.059992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 21:04:40.060059       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:40.110696       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:40.110749       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:40.117840       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1216 21:04:40.117874       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:40.122553       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:40.122647       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1216 21:04:42.611304       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 21:12:51 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:12:51.496568    3056 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383571496092352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:12:51 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:12:51.496974    3056 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383571496092352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:12:52 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:12:52.265206    3056 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-84xtf" podUID="569c6717-dc12-474f-8156-d2dd9e410a54"
	Dec 16 21:13:01 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:01.498725    3056 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383581498326931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:01 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:01.499065    3056 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383581498326931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:07 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:07.265952    3056 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-84xtf" podUID="569c6717-dc12-474f-8156-d2dd9e410a54"
	Dec 16 21:13:11 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:11.500905    3056 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383591500428956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:11 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:11.500957    3056 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383591500428956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:19 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:19.265963    3056 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-84xtf" podUID="569c6717-dc12-474f-8156-d2dd9e410a54"
	Dec 16 21:13:21 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:21.503026    3056 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383601502542376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:21 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:21.503060    3056 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383601502542376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:31 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:31.504912    3056 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383611504407193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:31 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:31.505178    3056 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383611504407193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:33 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:33.265794    3056 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-84xtf" podUID="569c6717-dc12-474f-8156-d2dd9e410a54"
	Dec 16 21:13:41 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:41.281669    3056 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 16 21:13:41 default-k8s-diff-port-327790 kubelet[3056]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 16 21:13:41 default-k8s-diff-port-327790 kubelet[3056]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 16 21:13:41 default-k8s-diff-port-327790 kubelet[3056]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 16 21:13:41 default-k8s-diff-port-327790 kubelet[3056]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 16 21:13:41 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:41.508461    3056 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383621507863767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:41 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:41.508504    3056 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383621507863767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:45 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:45.265144    3056 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-84xtf" podUID="569c6717-dc12-474f-8156-d2dd9e410a54"
	Dec 16 21:13:51 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:51.511094    3056 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383631510517861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:51 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:51.511425    3056 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383631510517861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:59 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:13:59.266781    3056 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-84xtf" podUID="569c6717-dc12-474f-8156-d2dd9e410a54"
	
	
	==> storage-provisioner [4d0826a03ee32bda77ca97335013ae91f002f774efa4f77d0b0a3c75ab0f2fae] <==
	I1216 21:04:48.282175       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 21:04:48.302562       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 21:04:48.302894       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 21:04:48.317655       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 21:04:48.317838       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-327790_ffff1088-c911-478b-89ff-c07daeb971b7!
	I1216 21:04:48.319492       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ec7988d-c1d8-4339-ae3a-872a45176971", APIVersion:"v1", ResourceVersion:"389", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-327790_ffff1088-c911-478b-89ff-c07daeb971b7 became leader
	I1216 21:04:48.431336       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-327790_ffff1088-c911-478b-89ff-c07daeb971b7!
	

                                                
                                                
-- /stdout --
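The captured logs above all point at one failure chain for default-k8s-diff-port-327790: metrics-server was pointed at the unreachable registry fake.domain (the kubelet ImagePullBackOff messages above, and the addons enable metrics-server ... --registries=MetricsServer=fake.domain entry in the Audit log further down), so the pod never runs and the apiserver keeps getting 503s while fetching the OpenAPI spec for v1beta1.metrics.k8s.io. One way to confirm the aggregated metrics API is the unavailable piece would be to inspect the APIService object directly (a sketch; the harness does not run this):

	kubectl --context default-k8s-diff-port-327790 get apiservice v1beta1.metrics.k8s.io
	# Expect Available=False with a discovery/503 message while the
	# metrics-server pod cannot pull its image.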
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-327790 -n default-k8s-diff-port-327790
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-327790 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-84xtf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-327790 describe pod metrics-server-f79f97bbb-84xtf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-327790 describe pod metrics-server-f79f97bbb-84xtf: exit status 1 (62.346578ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-84xtf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-327790 describe pod metrics-server-f79f97bbb-84xtf: exit status 1
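Note that the get above ran with -A, while the describe was issued without a namespace, so it most likely looked in the context's default namespace; the kubelet log places metrics-server-f79f97bbb-84xtf in kube-system. A namespaced describe would be the natural follow-up (a sketch, not something the harness executes; the pod may also simply have been recreated in the meantime):

	kubectl --context default-k8s-diff-port-327790 -n kube-system \
	  describe pod metrics-server-f79f97bbb-84xtf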
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1216 21:05:16.961370   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-232338 -n no-preload-232338
start_stop_delete_test.go:272: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-16 21:14:07.807472154 +0000 UTC m=+5966.281119907
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
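The wait that just timed out is, in plain kubectl terms, roughly the following (a sketch of what the harness polls for, not its actual Go code):

	kubectl --context no-preload-232338 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	# Matches the namespace, label selector, and 9m0s window reported above.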
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-232338 -n no-preload-232338
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-232338 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-232338 logs -n 25: (2.154186918s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-976873                              | stopped-upgrade-976873       | jenkins | v1.34.0 | 16 Dec 24 20:49 UTC | 16 Dec 24 20:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-560677                           | kubernetes-upgrade-560677    | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:50 UTC |
	| start   | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-976873                              | stopped-upgrade-976873       | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:50 UTC |
	| start   | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-270954                              | cert-expiration-270954       | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-606219            | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-270954                              | cert-expiration-270954       | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-384008 | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | disable-driver-mounts-384008                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:52 UTC |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-232338             | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC | 16 Dec 24 20:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-327790  | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC | 16 Dec 24 20:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC |                     |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-847766        | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-606219                 | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC | 16 Dec 24 21:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-232338                  | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC | 16 Dec 24 21:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-327790       | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 20:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 21:04 UTC |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-847766             | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 20:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 20:55:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 20:55:34.390724   60933 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:55:34.390973   60933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:55:34.390982   60933 out.go:358] Setting ErrFile to fd 2...
	I1216 20:55:34.390986   60933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:55:34.391166   60933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:55:34.391763   60933 out.go:352] Setting JSON to false
	I1216 20:55:34.392611   60933 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5879,"bootTime":1734376655,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 20:55:34.392675   60933 start.go:139] virtualization: kvm guest
	I1216 20:55:34.394822   60933 out.go:177] * [old-k8s-version-847766] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 20:55:34.396184   60933 notify.go:220] Checking for updates...
	I1216 20:55:34.396201   60933 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 20:55:34.397724   60933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 20:55:34.399130   60933 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:55:34.400470   60933 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:55:34.401934   60933 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 20:55:34.403341   60933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 20:55:34.405179   60933 config.go:182] Loaded profile config "old-k8s-version-847766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:55:34.405571   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:55:34.405650   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:55:34.421052   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I1216 20:55:34.421523   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:55:34.422018   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:55:34.422035   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:55:34.422373   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:55:34.422646   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:55:34.424565   60933 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I1216 20:55:34.426088   60933 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 20:55:34.426419   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:55:34.426474   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:55:34.441375   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I1216 20:55:34.441833   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:55:34.442327   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:55:34.442349   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:55:34.442658   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:55:34.442852   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:55:34.480512   60933 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 20:55:34.481972   60933 start.go:297] selected driver: kvm2
	I1216 20:55:34.481988   60933 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:55:34.482125   60933 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 20:55:34.482826   60933 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:55:34.482907   60933 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 20:55:34.498561   60933 install.go:137] /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1216 20:55:34.498953   60933 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 20:55:34.498981   60933 cni.go:84] Creating CNI manager for ""
	I1216 20:55:34.499022   60933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:55:34.499060   60933 start.go:340] cluster config:
	{Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:55:34.499164   60933 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:55:34.501128   60933 out.go:177] * Starting "old-k8s-version-847766" primary control-plane node in "old-k8s-version-847766" cluster
	I1216 20:55:29.827520   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:32.899553   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:30.468027   60829 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:55:30.468071   60829 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:55:30.468079   60829 cache.go:56] Caching tarball of preloaded images
	I1216 20:55:30.468192   60829 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:55:30.468206   60829 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1216 20:55:30.468310   60829 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/config.json ...
	I1216 20:55:30.468540   60829 start.go:360] acquireMachinesLock for default-k8s-diff-port-327790: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:55:34.502579   60933 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:55:34.502609   60933 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:55:34.502615   60933 cache.go:56] Caching tarball of preloaded images
	I1216 20:55:34.502716   60933 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:55:34.502731   60933 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1216 20:55:34.502823   60933 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/config.json ...
	I1216 20:55:34.503011   60933 start.go:360] acquireMachinesLock for old-k8s-version-847766: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:55:38.979556   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:42.051532   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:48.131588   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:51.203568   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:57.283622   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:00.355490   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:06.435543   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:09.507559   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:15.587526   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:18.659657   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:24.739528   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:27.811498   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:33.891518   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:36.963554   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:43.043553   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:46.115578   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:52.195583   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:55.267507   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:01.347591   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:04.419562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:10.499479   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:13.571540   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:19.651541   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:22.723545   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:28.803551   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:31.875527   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:37.955563   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:41.027520   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:47.107494   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:50.179566   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:56.259550   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:59.331540   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:05.411562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:08.483592   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:14.563574   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:17.635522   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:23.715548   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:26.787559   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:32.867539   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:35.939502   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:42.019562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:45.091545   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:51.171521   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:54.243542   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:57.248710   60421 start.go:364] duration metric: took 4m14.403979547s to acquireMachinesLock for "no-preload-232338"
	I1216 20:58:57.248796   60421 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:58:57.248804   60421 fix.go:54] fixHost starting: 
	I1216 20:58:57.249232   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:58:57.249288   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:58:57.264905   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39773
	I1216 20:58:57.265423   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:58:57.265982   60421 main.go:141] libmachine: Using API Version  1
	I1216 20:58:57.266005   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:58:57.266396   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:58:57.266636   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:58:57.266807   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 20:58:57.268705   60421 fix.go:112] recreateIfNeeded on no-preload-232338: state=Stopped err=<nil>
	I1216 20:58:57.268730   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	W1216 20:58:57.268918   60421 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:58:57.270855   60421 out.go:177] * Restarting existing kvm2 VM for "no-preload-232338" ...
	I1216 20:58:57.272142   60421 main.go:141] libmachine: (no-preload-232338) Calling .Start
	I1216 20:58:57.272374   60421 main.go:141] libmachine: (no-preload-232338) Ensuring networks are active...
	I1216 20:58:57.273245   60421 main.go:141] libmachine: (no-preload-232338) Ensuring network default is active
	I1216 20:58:57.273660   60421 main.go:141] libmachine: (no-preload-232338) Ensuring network mk-no-preload-232338 is active
	I1216 20:58:57.274025   60421 main.go:141] libmachine: (no-preload-232338) Getting domain xml...
	I1216 20:58:57.274673   60421 main.go:141] libmachine: (no-preload-232338) Creating domain...
	I1216 20:58:57.245632   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:58:57.245753   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 20:58:57.246111   60215 buildroot.go:166] provisioning hostname "embed-certs-606219"
	I1216 20:58:57.246149   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 20:58:57.246399   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 20:58:57.248517   60215 machine.go:96] duration metric: took 4m37.414570479s to provisionDockerMachine
	I1216 20:58:57.248579   60215 fix.go:56] duration metric: took 4m37.437232743s for fixHost
	I1216 20:58:57.248587   60215 start.go:83] releasing machines lock for "embed-certs-606219", held for 4m37.437262865s
	W1216 20:58:57.248614   60215 start.go:714] error starting host: provision: host is not running
	W1216 20:58:57.248791   60215 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1216 20:58:57.248801   60215 start.go:729] Will try again in 5 seconds ...
	I1216 20:58:58.506521   60421 main.go:141] libmachine: (no-preload-232338) Waiting to get IP...
	I1216 20:58:58.507302   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:58.507627   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:58.507699   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:58.507613   61660 retry.go:31] will retry after 230.281045ms: waiting for machine to come up
	I1216 20:58:58.739343   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:58.739781   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:58.739804   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:58.739741   61660 retry.go:31] will retry after 323.962271ms: waiting for machine to come up
	I1216 20:58:59.065388   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:59.065856   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:59.065884   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:59.065816   61660 retry.go:31] will retry after 364.058481ms: waiting for machine to come up
	I1216 20:58:59.431290   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:59.431680   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:59.431707   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:59.431631   61660 retry.go:31] will retry after 569.845721ms: waiting for machine to come up
	I1216 20:59:00.003562   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:00.004030   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:00.004093   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:00.003988   61660 retry.go:31] will retry after 728.729909ms: waiting for machine to come up
	I1216 20:59:00.733954   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:00.734450   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:00.734482   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:00.734388   61660 retry.go:31] will retry after 679.479889ms: waiting for machine to come up
	I1216 20:59:01.415289   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:01.415739   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:01.415763   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:01.415690   61660 retry.go:31] will retry after 1.136560245s: waiting for machine to come up
	I1216 20:59:02.554094   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:02.554523   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:02.554548   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:02.554470   61660 retry.go:31] will retry after 1.299578742s: waiting for machine to come up
	I1216 20:59:02.250499   60215 start.go:360] acquireMachinesLock for embed-certs-606219: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:59:03.855999   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:03.856366   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:03.856399   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:03.856300   61660 retry.go:31] will retry after 1.761269163s: waiting for machine to come up
	I1216 20:59:05.620383   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:05.620837   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:05.620858   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:05.620818   61660 retry.go:31] will retry after 2.100894301s: waiting for machine to come up
	I1216 20:59:07.723931   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:07.724300   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:07.724322   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:07.724273   61660 retry.go:31] will retry after 2.57501483s: waiting for machine to come up
	I1216 20:59:10.302185   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:10.302766   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:10.302802   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:10.302706   61660 retry.go:31] will retry after 2.68456895s: waiting for machine to come up
	I1216 20:59:17.060397   60829 start.go:364] duration metric: took 3m46.591813882s to acquireMachinesLock for "default-k8s-diff-port-327790"
	I1216 20:59:17.060456   60829 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:17.060462   60829 fix.go:54] fixHost starting: 
	I1216 20:59:17.060878   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:17.060935   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:17.079226   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41365
	I1216 20:59:17.079715   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:17.080173   60829 main.go:141] libmachine: Using API Version  1
	I1216 20:59:17.080202   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:17.080554   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:17.080731   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:17.080873   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 20:59:17.082368   60829 fix.go:112] recreateIfNeeded on default-k8s-diff-port-327790: state=Stopped err=<nil>
	I1216 20:59:17.082399   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	W1216 20:59:17.082570   60829 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:17.085104   60829 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-327790" ...
	I1216 20:59:12.988787   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:12.989140   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:12.989172   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:12.989098   61660 retry.go:31] will retry after 2.793178881s: waiting for machine to come up
	I1216 20:59:15.786011   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.786518   60421 main.go:141] libmachine: (no-preload-232338) Found IP for machine: 192.168.50.240
	I1216 20:59:15.786540   60421 main.go:141] libmachine: (no-preload-232338) Reserving static IP address...
	I1216 20:59:15.786564   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has current primary IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.786948   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "no-preload-232338", mac: "52:54:00:07:00:29", ip: "192.168.50.240"} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.786983   60421 main.go:141] libmachine: (no-preload-232338) DBG | skip adding static IP to network mk-no-preload-232338 - found existing host DHCP lease matching {name: "no-preload-232338", mac: "52:54:00:07:00:29", ip: "192.168.50.240"}
	I1216 20:59:15.786995   60421 main.go:141] libmachine: (no-preload-232338) Reserved static IP address: 192.168.50.240
	I1216 20:59:15.787009   60421 main.go:141] libmachine: (no-preload-232338) Waiting for SSH to be available...
	I1216 20:59:15.787022   60421 main.go:141] libmachine: (no-preload-232338) DBG | Getting to WaitForSSH function...
	I1216 20:59:15.789175   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.789509   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.789542   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.789633   60421 main.go:141] libmachine: (no-preload-232338) DBG | Using SSH client type: external
	I1216 20:59:15.789674   60421 main.go:141] libmachine: (no-preload-232338) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa (-rw-------)
	I1216 20:59:15.789709   60421 main.go:141] libmachine: (no-preload-232338) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:15.789718   60421 main.go:141] libmachine: (no-preload-232338) DBG | About to run SSH command:
	I1216 20:59:15.789726   60421 main.go:141] libmachine: (no-preload-232338) DBG | exit 0
	I1216 20:59:15.915980   60421 main.go:141] libmachine: (no-preload-232338) DBG | SSH cmd err, output: <nil>: 
	I1216 20:59:15.916473   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetConfigRaw
	I1216 20:59:15.917088   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:15.919782   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.920161   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.920192   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.920408   60421 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/config.json ...
	I1216 20:59:15.920636   60421 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:15.920654   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:15.920875   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:15.923221   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.923623   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.923650   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.923784   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:15.923971   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:15.924107   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:15.924246   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:15.924395   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:15.924715   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:15.924729   60421 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:16.032079   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:16.032108   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.032397   60421 buildroot.go:166] provisioning hostname "no-preload-232338"
	I1216 20:59:16.032423   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.032649   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.035467   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.035798   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.035826   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.036003   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.036184   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.036335   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.036494   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.036679   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.036847   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.036859   60421 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-232338 && echo "no-preload-232338" | sudo tee /etc/hostname
	I1216 20:59:16.161958   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-232338
	
	I1216 20:59:16.161996   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.164585   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.165086   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.165130   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.165370   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.165578   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.165746   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.165877   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.166015   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.166188   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.166204   60421 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-232338' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-232338/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-232338' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:16.285329   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:16.285374   60421 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:16.285407   60421 buildroot.go:174] setting up certificates
	I1216 20:59:16.285422   60421 provision.go:84] configureAuth start
	I1216 20:59:16.285432   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.285764   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:16.288773   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.289161   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.289192   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.289405   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.291687   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.292042   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.292072   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.292190   60421 provision.go:143] copyHostCerts
	I1216 20:59:16.292260   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:16.292274   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:16.292343   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:16.292470   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:16.292481   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:16.292508   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:16.292563   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:16.292570   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:16.292590   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:16.292649   60421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.no-preload-232338 san=[127.0.0.1 192.168.50.240 localhost minikube no-preload-232338]
	I1216 20:59:16.407096   60421 provision.go:177] copyRemoteCerts
	I1216 20:59:16.407187   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:16.407227   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.410400   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.410725   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.410755   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.410977   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.411188   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.411437   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.411618   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:16.498456   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 20:59:16.525297   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:16.551135   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 20:59:16.576040   60421 provision.go:87] duration metric: took 290.601941ms to configureAuth
	I1216 20:59:16.576074   60421 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:16.576288   60421 config.go:182] Loaded profile config "no-preload-232338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:59:16.576396   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.579169   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.579607   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.579641   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.579795   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.580016   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.580165   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.580311   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.580467   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.580629   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.580643   60421 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:16.816973   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:16.816998   60421 machine.go:96] duration metric: took 896.349056ms to provisionDockerMachine
	I1216 20:59:16.817010   60421 start.go:293] postStartSetup for "no-preload-232338" (driver="kvm2")
	I1216 20:59:16.817030   60421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:16.817044   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:16.817427   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:16.817454   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.820182   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.820550   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.820578   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.820713   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.820914   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.821096   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.821274   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:16.906513   60421 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:16.911314   60421 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:16.911346   60421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:16.911482   60421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:16.911589   60421 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:16.911720   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:16.921890   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:16.947114   60421 start.go:296] duration metric: took 130.089628ms for postStartSetup
	I1216 20:59:16.947192   60421 fix.go:56] duration metric: took 19.698385497s for fixHost
	I1216 20:59:16.947229   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.950156   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.950543   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.950575   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.950780   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.950996   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.951199   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.951394   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.951604   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.951829   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.951843   60421 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:17.060233   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382757.032597424
	
	I1216 20:59:17.060258   60421 fix.go:216] guest clock: 1734382757.032597424
	I1216 20:59:17.060265   60421 fix.go:229] Guest: 2024-12-16 20:59:17.032597424 +0000 UTC Remote: 2024-12-16 20:59:16.947203535 +0000 UTC m=+274.247918927 (delta=85.393889ms)
	I1216 20:59:17.060290   60421 fix.go:200] guest clock delta is within tolerance: 85.393889ms
	I1216 20:59:17.060294   60421 start.go:83] releasing machines lock for "no-preload-232338", held for 19.811539815s
	I1216 20:59:17.060318   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.060636   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:17.063346   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.063742   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.063764   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.063900   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064419   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064647   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064766   60421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:17.064804   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:17.064897   60421 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:17.064923   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:17.067687   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.067897   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068129   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.068166   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068314   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:17.068318   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.068343   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068491   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:17.068573   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:17.068754   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:17.068778   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:17.068914   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:17.069085   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:17.069229   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:17.149502   60421 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:17.184981   60421 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:17.335267   60421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:17.344316   60421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:17.344381   60421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:17.362422   60421 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:17.362450   60421 start.go:495] detecting cgroup driver to use...
	I1216 20:59:17.362526   60421 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:17.379285   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:17.394451   60421 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:17.394514   60421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:17.411856   60421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:17.428028   60421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:17.557602   60421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:17.699140   60421 docker.go:233] disabling docker service ...
	I1216 20:59:17.699215   60421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:17.715236   60421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:17.729268   60421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:17.875729   60421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:18.007569   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:18.022940   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:18.042227   60421 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 20:59:18.042292   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.053011   60421 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:18.053081   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.063767   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.074262   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.085372   60421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:18.098366   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.113619   60421 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.134081   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.145276   60421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:18.155733   60421 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:18.155806   60421 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:18.170492   60421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 20:59:18.182276   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:18.291278   60421 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:18.384618   60421 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:18.384700   60421 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:18.390755   60421 start.go:563] Will wait 60s for crictl version
	I1216 20:59:18.390823   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.395435   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:18.439300   60421 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:18.439390   60421 ssh_runner.go:195] Run: crio --version
	I1216 20:59:18.473976   60421 ssh_runner.go:195] Run: crio --version
	I1216 20:59:18.505262   60421 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 20:59:17.086569   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Start
	I1216 20:59:17.086752   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring networks are active...
	I1216 20:59:17.087656   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring network default is active
	I1216 20:59:17.088082   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring network mk-default-k8s-diff-port-327790 is active
	I1216 20:59:17.088482   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Getting domain xml...
	I1216 20:59:17.089219   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Creating domain...
	I1216 20:59:18.413245   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting to get IP...
	I1216 20:59:18.414327   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.414794   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.414907   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.414784   61807 retry.go:31] will retry after 229.952775ms: waiting for machine to come up
	I1216 20:59:18.646270   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.646677   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.646727   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.646654   61807 retry.go:31] will retry after 341.342128ms: waiting for machine to come up
	I1216 20:59:18.989285   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.989781   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.989809   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.989740   61807 retry.go:31] will retry after 311.937657ms: waiting for machine to come up
	I1216 20:59:19.303619   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.304189   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.304221   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:19.304131   61807 retry.go:31] will retry after 515.638431ms: waiting for machine to come up
	I1216 20:59:19.821478   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.821955   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.821997   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:19.821900   61807 retry.go:31] will retry after 590.835789ms: waiting for machine to come up
	I1216 20:59:18.506840   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:18.510260   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:18.510654   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:18.510689   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:18.510875   60421 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:18.515632   60421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:18.529943   60421 kubeadm.go:883] updating cluster {Name:no-preload-232338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:18.530128   60421 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:59:18.530184   60421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:18.569526   60421 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 20:59:18.569555   60421 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.0 registry.k8s.io/kube-controller-manager:v1.32.0 registry.k8s.io/kube-scheduler:v1.32.0 registry.k8s.io/kube-proxy:v1.32.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 20:59:18.569650   60421 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:18.569669   60421 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.569688   60421 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.569651   60421 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.569774   60421 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.569859   60421 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.569859   60421 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1216 20:59:18.570294   60421 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.571577   60421 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.571602   60421 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.571582   60421 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:18.571585   60421 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.571583   60421 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.571580   60421 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.571828   60421 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.571953   60421 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1216 20:59:18.781052   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.783569   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.795901   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.799273   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.801098   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.802163   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1216 20:59:18.828334   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.897880   60421 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.0" does not exist at hash "a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5" in container runtime
	I1216 20:59:18.897942   60421 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.898003   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.910616   60421 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I1216 20:59:18.910665   60421 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.910713   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.937699   60421 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.0" does not exist at hash "8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3" in container runtime
	I1216 20:59:18.937753   60421 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.937804   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.979455   60421 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.0" does not exist at hash "c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4" in container runtime
	I1216 20:59:18.979500   60421 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.979540   60421 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1216 20:59:18.979555   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.979586   60421 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.979636   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.002472   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.076177   60421 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.0" needs transfer: "registry.k8s.io/kube-proxy:v1.32.0" does not exist at hash "040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08" in container runtime
	I1216 20:59:19.076217   60421 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.076237   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.076252   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.076292   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.076351   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.076408   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.076487   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.076511   60421 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1216 20:59:19.076536   60421 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.076580   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.204766   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.204846   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.204904   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.204959   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.205097   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.205212   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.205285   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.365421   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.365466   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.365512   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.365620   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.365652   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.365771   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.365861   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.539614   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1216 20:59:19.539729   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:19.539740   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.539740   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0
	I1216 20:59:19.539817   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0
	I1216 20:59:19.539839   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:19.539840   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.539885   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.539949   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0
	I1216 20:59:19.540000   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I1216 20:59:19.540029   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:19.540062   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:19.555043   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.32.0 (exists)
	I1216 20:59:19.555076   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.555135   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.555251   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1216 20:59:19.630857   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.16-0 (exists)
	I1216 20:59:19.630949   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1216 20:59:19.630983   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0
	I1216 20:59:19.631030   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.32.0 (exists)
	I1216 20:59:19.631065   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:19.631104   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.32.0 (exists)
	I1216 20:59:19.631069   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:21.838285   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0: (2.283119694s)
	I1216 20:59:21.838328   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 from cache
	I1216 20:59:21.838359   60421 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:21.838394   60421 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.20725659s)
	I1216 20:59:21.838414   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1216 20:59:21.838421   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:21.838361   60421 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.0: (2.207274997s)
	I1216 20:59:21.838471   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.32.0 (exists)
	I1216 20:59:20.414932   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:20.415565   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:20.415597   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:20.415502   61807 retry.go:31] will retry after 698.152518ms: waiting for machine to come up
	I1216 20:59:21.115103   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:21.115597   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:21.115627   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:21.115543   61807 retry.go:31] will retry after 891.02308ms: waiting for machine to come up
	I1216 20:59:22.008636   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.009070   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.009098   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:22.009015   61807 retry.go:31] will retry after 923.634312ms: waiting for machine to come up
	I1216 20:59:22.934238   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.934753   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.934784   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:22.934697   61807 retry.go:31] will retry after 1.142718367s: waiting for machine to come up
	I1216 20:59:24.078935   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:24.079398   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:24.079429   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:24.079363   61807 retry.go:31] will retry after 1.541033224s: waiting for machine to come up
	I1216 20:59:23.901058   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.062611423s)
	I1216 20:59:23.901091   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1216 20:59:23.901122   60421 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:23.901169   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:25.621932   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:25.622401   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:25.622433   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:25.622364   61807 retry.go:31] will retry after 2.600280234s: waiting for machine to come up
	I1216 20:59:28.224296   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:28.224874   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:28.224892   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:28.224828   61807 retry.go:31] will retry after 3.308841216s: waiting for machine to come up
	I1216 20:59:27.793238   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0: (3.892042799s)
	I1216 20:59:27.793280   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 from cache
	I1216 20:59:27.793321   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:27.793420   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:29.552069   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0: (1.758623471s)
	I1216 20:59:29.552102   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 from cache
	I1216 20:59:29.552130   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:29.552177   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:31.708930   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0: (2.156719559s)
	I1216 20:59:31.708971   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 from cache
	I1216 20:59:31.709008   60421 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:31.709057   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:32.660657   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1216 20:59:32.660713   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:32.660775   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:31.537153   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:31.537735   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:31.537795   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:31.537710   61807 retry.go:31] will retry after 4.259700282s: waiting for machine to come up
	I1216 20:59:37.140408   60933 start.go:364] duration metric: took 4m2.637362394s to acquireMachinesLock for "old-k8s-version-847766"
	I1216 20:59:37.140483   60933 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:37.140491   60933 fix.go:54] fixHost starting: 
	I1216 20:59:37.140933   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:37.140988   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:37.159075   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
	I1216 20:59:37.159574   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:37.160140   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:59:37.160172   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:37.160560   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:37.160773   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:37.160889   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetState
	I1216 20:59:37.162561   60933 fix.go:112] recreateIfNeeded on old-k8s-version-847766: state=Stopped err=<nil>
	I1216 20:59:37.162603   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	W1216 20:59:37.162755   60933 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:37.166031   60933 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-847766" ...
	I1216 20:59:34.634064   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0: (1.973261206s)
	I1216 20:59:34.634117   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 from cache
	I1216 20:59:34.634154   60421 cache_images.go:123] Successfully loaded all cached images
	I1216 20:59:34.634160   60421 cache_images.go:92] duration metric: took 16.064590407s to LoadCachedImages
	I1216 20:59:34.634171   60421 kubeadm.go:934] updating node { 192.168.50.240 8443 v1.32.0 crio true true} ...
	I1216 20:59:34.634331   60421 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-232338 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 20:59:34.634420   60421 ssh_runner.go:195] Run: crio config
	I1216 20:59:34.688034   60421 cni.go:84] Creating CNI manager for ""
	I1216 20:59:34.688059   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:34.688068   60421 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:59:34.688093   60421 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.240 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-232338 NodeName:no-preload-232338 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 20:59:34.688277   60421 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-232338"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.240"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.240"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 20:59:34.688356   60421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 20:59:34.699709   60421 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:59:34.699784   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:59:34.710306   60421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1216 20:59:34.732401   60421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:59:34.757561   60421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1216 20:59:34.776094   60421 ssh_runner.go:195] Run: grep 192.168.50.240	control-plane.minikube.internal$ /etc/hosts
	I1216 20:59:34.780341   60421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:34.794025   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:34.930543   60421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:59:34.948720   60421 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338 for IP: 192.168.50.240
	I1216 20:59:34.948752   60421 certs.go:194] generating shared ca certs ...
	I1216 20:59:34.948776   60421 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:59:34.949035   60421 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:59:34.949094   60421 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:59:34.949115   60421 certs.go:256] generating profile certs ...
	I1216 20:59:34.949243   60421 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.key
	I1216 20:59:34.949327   60421 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.key.674e04e3
	I1216 20:59:34.949379   60421 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.key
	I1216 20:59:34.949509   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:59:34.949547   60421 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:59:34.949557   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:59:34.949582   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:59:34.949604   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:59:34.949627   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:59:34.949662   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:34.950648   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:59:34.994491   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:59:35.029853   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:59:35.058834   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:59:35.096870   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 20:59:35.126467   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 20:59:35.160826   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:59:35.186344   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 20:59:35.211125   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:59:35.238705   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:59:35.266485   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:59:35.291729   60421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 20:59:35.311939   60421 ssh_runner.go:195] Run: openssl version
	I1216 20:59:35.318397   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:59:35.332081   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.336967   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.337022   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.343307   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 20:59:35.356515   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:59:35.370380   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.375538   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.375589   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.381736   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:59:35.395677   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:59:35.409029   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.414358   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.414427   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.421352   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 20:59:35.435322   60421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:59:35.440479   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 20:59:35.447408   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 20:59:35.453992   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 20:59:35.460713   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 20:59:35.467109   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 20:59:35.473412   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 20:59:35.479720   60421 kubeadm.go:392] StartCluster: {Name:no-preload-232338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:59:35.479824   60421 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:59:35.479901   60421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:35.521238   60421 cri.go:89] found id: ""
	I1216 20:59:35.521331   60421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 20:59:35.534818   60421 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 20:59:35.534848   60421 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 20:59:35.534893   60421 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 20:59:35.547460   60421 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 20:59:35.548501   60421 kubeconfig.go:125] found "no-preload-232338" server: "https://192.168.50.240:8443"
	I1216 20:59:35.550575   60421 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 20:59:35.560957   60421 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.240
	I1216 20:59:35.561018   60421 kubeadm.go:1160] stopping kube-system containers ...
	I1216 20:59:35.561033   60421 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 20:59:35.561094   60421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:35.598970   60421 cri.go:89] found id: ""
	I1216 20:59:35.599082   60421 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 20:59:35.618027   60421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:59:35.629418   60421 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:59:35.629455   60421 kubeadm.go:157] found existing configuration files:
	
	I1216 20:59:35.629501   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 20:59:35.639825   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:59:35.639896   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:59:35.650676   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 20:59:35.662171   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:59:35.662228   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:59:35.674780   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 20:59:35.686565   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:59:35.686640   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:59:35.698956   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 20:59:35.710813   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:59:35.710874   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 20:59:35.723307   60421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 20:59:35.734712   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:35.863375   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.021512   60421 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.158099337s)
	I1216 20:59:37.021546   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.269641   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.348978   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.428210   60421 api_server.go:52] waiting for apiserver process to appear ...
	I1216 20:59:37.428296   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:35.800344   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.800861   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Found IP for machine: 192.168.39.162
	I1216 20:59:35.800889   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has current primary IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.800899   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Reserving static IP address...
	I1216 20:59:35.801367   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-327790", mac: "52:54:00:68:47:d5", ip: "192.168.39.162"} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.801395   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Reserved static IP address: 192.168.39.162
	I1216 20:59:35.801419   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | skip adding static IP to network mk-default-k8s-diff-port-327790 - found existing host DHCP lease matching {name: "default-k8s-diff-port-327790", mac: "52:54:00:68:47:d5", ip: "192.168.39.162"}
	I1216 20:59:35.801439   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for SSH to be available...
	I1216 20:59:35.801452   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Getting to WaitForSSH function...
	I1216 20:59:35.803875   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.804226   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.804257   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.804407   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Using SSH client type: external
	I1216 20:59:35.804439   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa (-rw-------)
	I1216 20:59:35.804472   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:35.804493   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | About to run SSH command:
	I1216 20:59:35.804517   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | exit 0
	I1216 20:59:35.935325   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | SSH cmd err, output: <nil>: 
	I1216 20:59:35.935765   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetConfigRaw
	I1216 20:59:35.936442   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:35.938945   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.939369   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.939395   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.939654   60829 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/config.json ...
	I1216 20:59:35.939915   60829 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:35.939938   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:35.940183   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:35.942412   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.942758   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.942787   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.942885   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:35.943067   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:35.943205   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:35.943347   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:35.943501   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:35.943687   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:35.943697   60829 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:36.060257   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:36.060297   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.060608   60829 buildroot.go:166] provisioning hostname "default-k8s-diff-port-327790"
	I1216 20:59:36.060634   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.060853   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.063758   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.064060   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.064097   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.064222   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.064427   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.064600   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.064745   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.064910   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.065132   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.065151   60829 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-327790 && echo "default-k8s-diff-port-327790" | sudo tee /etc/hostname
	I1216 20:59:36.194522   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-327790
	
	I1216 20:59:36.194555   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.197422   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.197770   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.197818   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.198007   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.198217   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.198446   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.198606   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.198803   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.199037   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.199062   60829 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-327790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-327790/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-327790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:36.320779   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:36.320808   60829 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:36.320833   60829 buildroot.go:174] setting up certificates
	I1216 20:59:36.320845   60829 provision.go:84] configureAuth start
	I1216 20:59:36.320854   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.321171   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:36.323701   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.324019   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.324044   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.324254   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.326002   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.326317   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.326348   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.326478   60829 provision.go:143] copyHostCerts
	I1216 20:59:36.326555   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:36.326567   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:36.326635   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:36.326747   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:36.326759   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:36.326786   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:36.326856   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:36.326866   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:36.326887   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:36.326949   60829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-327790 san=[127.0.0.1 192.168.39.162 default-k8s-diff-port-327790 localhost minikube]
	I1216 20:59:36.480215   60829 provision.go:177] copyRemoteCerts
	I1216 20:59:36.480278   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:36.480304   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.482859   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.483213   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.483258   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.483500   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.483712   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.483903   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.484087   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:36.571252   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:36.599399   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1216 20:59:36.624194   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 20:59:36.649294   60829 provision.go:87] duration metric: took 328.437433ms to configureAuth
	I1216 20:59:36.649325   60829 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:36.649494   60829 config.go:182] Loaded profile config "default-k8s-diff-port-327790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:59:36.649567   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.652411   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.652838   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.652868   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.653006   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.653264   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.653490   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.653704   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.653879   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.654059   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.654076   60829 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:36.893006   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:36.893043   60829 machine.go:96] duration metric: took 953.113126ms to provisionDockerMachine
	I1216 20:59:36.893057   60829 start.go:293] postStartSetup for "default-k8s-diff-port-327790" (driver="kvm2")
	I1216 20:59:36.893070   60829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:36.893101   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:36.893466   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:36.893494   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.896151   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.896531   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.896561   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.896683   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.896893   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.897100   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.897280   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:36.982077   60829 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:36.986598   60829 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:36.986624   60829 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:36.986702   60829 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:36.986795   60829 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:36.986919   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:36.996453   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:37.021838   60829 start.go:296] duration metric: took 128.770799ms for postStartSetup
	I1216 20:59:37.021873   60829 fix.go:56] duration metric: took 19.961410312s for fixHost
	I1216 20:59:37.021896   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.024668   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.025171   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.025207   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.025369   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.025591   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.025746   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.025892   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.026040   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:37.026257   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:37.026273   60829 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:37.140228   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382777.110726967
	
	I1216 20:59:37.140254   60829 fix.go:216] guest clock: 1734382777.110726967
	I1216 20:59:37.140264   60829 fix.go:229] Guest: 2024-12-16 20:59:37.110726967 +0000 UTC Remote: 2024-12-16 20:59:37.021877328 +0000 UTC m=+246.706572335 (delta=88.849639ms)
	I1216 20:59:37.140308   60829 fix.go:200] guest clock delta is within tolerance: 88.849639ms
	I1216 20:59:37.140315   60829 start.go:83] releasing machines lock for "default-k8s-diff-port-327790", held for 20.079880217s
	I1216 20:59:37.140347   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.140632   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:37.143268   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.143748   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.143775   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.143983   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144601   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144789   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144883   60829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:37.144930   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.145028   60829 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:37.145060   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.147817   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148192   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.148219   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148315   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148364   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.148576   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.148755   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.148776   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148804   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.148964   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:37.149020   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.149141   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.149285   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.149439   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:37.232354   60829 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:37.261803   60829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:37.416094   60829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:37.425458   60829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:37.425566   60829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:37.448873   60829 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:37.448914   60829 start.go:495] detecting cgroup driver to use...
	I1216 20:59:37.449014   60829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:37.472474   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:37.492445   60829 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:37.492518   60829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:37.510478   60829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:37.525452   60829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:37.642105   60829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:37.814506   60829 docker.go:233] disabling docker service ...
	I1216 20:59:37.814590   60829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:37.829046   60829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:37.845049   60829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:38.009931   60829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:38.158000   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:38.174376   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:38.197489   60829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 20:59:38.197555   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.213974   60829 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:38.214034   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.230383   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.244599   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.257574   60829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:38.273377   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.285854   60829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.312687   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.329105   60829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:38.343596   60829 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:38.343679   60829 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:38.362530   60829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 20:59:38.374384   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:38.564793   60829 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:38.682792   60829 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:38.682873   60829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:38.689164   60829 start.go:563] Will wait 60s for crictl version
	I1216 20:59:38.689251   60829 ssh_runner.go:195] Run: which crictl
	I1216 20:59:38.693994   60829 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:38.746808   60829 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:38.746913   60829 ssh_runner.go:195] Run: crio --version
	I1216 20:59:38.788490   60829 ssh_runner.go:195] Run: crio --version
	I1216 20:59:38.823957   60829 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
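
Before declaring CRI-O ready, the 60829 run above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed one-liners (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts the service. The Go sketch below only illustrates the same idempotent key-rewrite idea; the file path used in main and the helper name are hypothetical, not minikube's actual code.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey replaces every `key = ...` line in a CRI-O drop-in with the
// desired value, mirroring what the sed commands in the log accomplish.
// Path and keys here are illustrative, not the exact files minikube touches.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/tmp/02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
	_ = os.WriteFile(conf, []byte("pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"), 0o644)
	_ = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	_ = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
	b, _ := os.ReadFile(conf)
	fmt.Print(string(b)) // both keys now carry the values the log sets
}
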
	I1216 20:59:37.167470   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .Start
	I1216 20:59:37.167715   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring networks are active...
	I1216 20:59:37.168626   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring network default is active
	I1216 20:59:37.169114   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring network mk-old-k8s-version-847766 is active
	I1216 20:59:37.169670   60933 main.go:141] libmachine: (old-k8s-version-847766) Getting domain xml...
	I1216 20:59:37.170345   60933 main.go:141] libmachine: (old-k8s-version-847766) Creating domain...
	I1216 20:59:38.535579   60933 main.go:141] libmachine: (old-k8s-version-847766) Waiting to get IP...
	I1216 20:59:38.536661   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:38.537089   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:38.537174   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:38.537078   61973 retry.go:31] will retry after 277.62307ms: waiting for machine to come up
	I1216 20:59:38.816788   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:38.817329   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:38.817360   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:38.817272   61973 retry.go:31] will retry after 346.694382ms: waiting for machine to come up
	I1216 20:59:39.165778   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:39.166377   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:39.166436   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:39.166355   61973 retry.go:31] will retry after 416.599295ms: waiting for machine to come up
	I1216 20:59:38.825413   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:38.828442   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:38.828836   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:38.828870   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:38.829125   60829 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:38.833715   60829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:38.848989   60829 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-327790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:38.849121   60829 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:59:38.849169   60829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:38.891356   60829 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 20:59:38.891432   60829 ssh_runner.go:195] Run: which lz4
	I1216 20:59:38.896669   60829 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 20:59:38.901209   60829 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 20:59:38.901253   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
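
Whether that 398 MB preload tarball is copied at all comes down to one check a few lines earlier: `sudo crictl images --output json` did not list registry.k8s.io/kube-apiserver:v1.32.0, so the images were assumed not to be preloaded. A minimal sketch of that decision follows; the JSON field names match crictl's usual output but are assumptions here, and the helper is hypothetical.

package main

import (
	"encoding/json"
	"fmt"
)

// crictl images --output json returns a document shaped roughly like this;
// only the fields this sketch needs are modelled.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether a crictl image listing already contains the tag,
// which is the check that decides whether the preload tarball gets copied.
func hasImage(raw []byte, tag string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.32.0"]}]}`)
	ok, _ := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.32.0")
	fmt.Println("preloaded:", ok) // true -> skip copying the tarball
}
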
	I1216 20:59:37.928929   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:38.428939   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:38.454184   60421 api_server.go:72] duration metric: took 1.02597754s to wait for apiserver process to appear ...
	I1216 20:59:38.454211   60421 api_server.go:88] waiting for apiserver healthz status ...
	I1216 20:59:38.454252   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:38.454842   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 20:59:38.954378   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
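
The 60421 run is now polling the apiserver's /healthz endpoint; the first attempt is refused because the static pods are still starting, so it retries on a fixed cadence. Below is a minimal, hypothetical sketch of such a polling loop (URL, interval and timeout are assumptions taken from the log, and TLS verification is skipped only because the check goes straight to the node IP).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the overall timeout expires, similar to the "waiting for apiserver
// healthz status" loop in the log; names and intervals are illustrative.
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The health check hits the node IP directly, so certificate
			// verification is skipped in this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered healthy
			}
		}
		time.Sleep(interval) // e.g. the ~500ms cadence visible in the log
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	err := waitForHealthz("https://192.168.50.240:8443/healthz", 500*time.Millisecond, 4*time.Minute)
	fmt.Println(err)
}
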
	I1216 20:59:39.585259   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:39.585762   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:39.585791   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:39.585737   61973 retry.go:31] will retry after 526.969594ms: waiting for machine to come up
	I1216 20:59:40.114653   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:40.115175   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:40.115205   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:40.115140   61973 retry.go:31] will retry after 502.283372ms: waiting for machine to come up
	I1216 20:59:40.619067   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:40.619633   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:40.619682   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:40.619571   61973 retry.go:31] will retry after 764.799982ms: waiting for machine to come up
	I1216 20:59:41.385515   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:41.386066   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:41.386100   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:41.386027   61973 retry.go:31] will retry after 982.237202ms: waiting for machine to come up
	I1216 20:59:42.369934   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:42.370414   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:42.370449   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:42.370373   61973 retry.go:31] will retry after 1.163280736s: waiting for machine to come up
	I1216 20:59:43.534829   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:43.535194   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:43.535224   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:43.535143   61973 retry.go:31] will retry after 1.630958514s: waiting for machine to come up
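
While the old-k8s-version-847766 domain boots, the driver keeps asking libvirt for the machine's DHCP lease and sleeps a little longer after each miss (277 ms, 346 ms, ... 1.63 s above). The sketch below shows that growing, jittered backoff pattern in isolation; the function, growth factor and bounds are illustrative, not minikube's retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out, sleeping
// for a delay that grows each round with a little jitter, similar to the
// "will retry after ..." lines in the log. All parameters are illustrative.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// add up to 50% jitter so parallel waiters don't poll in lockstep
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow ~1.5x per attempt
	}
	return errors.New("gave up waiting")
}

func main() {
	tries := 0
	err := retryWithBackoff(10, 300*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("machine has no IP yet") // stand-in for the DHCP lookup
		}
		return nil
	})
	fmt.Println(err, "after", tries, "tries")
}
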
	I1216 20:59:40.539994   60829 crio.go:462] duration metric: took 1.643361409s to copy over tarball
	I1216 20:59:40.540066   60829 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 20:59:42.840346   60829 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.30025199s)
	I1216 20:59:42.840382   60829 crio.go:469] duration metric: took 2.300357568s to extract the tarball
	I1216 20:59:42.840392   60829 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 20:59:42.881650   60829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:42.928089   60829 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 20:59:42.928120   60829 cache_images.go:84] Images are preloaded, skipping loading
	I1216 20:59:42.928129   60829 kubeadm.go:934] updating node { 192.168.39.162 8444 v1.32.0 crio true true} ...
	I1216 20:59:42.928222   60829 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-327790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 20:59:42.928286   60829 ssh_runner.go:195] Run: crio config
	I1216 20:59:42.983315   60829 cni.go:84] Creating CNI manager for ""
	I1216 20:59:42.983348   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:42.983360   60829 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:59:42.983396   60829 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8444 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-327790 NodeName:default-k8s-diff-port-327790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 20:59:42.983556   60829 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-327790"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.162"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
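
The block above is the complete kubeadm configuration that minikube renders for this cluster and then ships to /var/tmp/minikube/kubeadm.yaml.new (2308 bytes, per the scp line below). As a rough illustration of how a config like this can be produced from a handful of per-cluster values, here is a minimal Go sketch using text/template; the struct and the shortened template are illustrative and are not minikube's actual bootstrapper code.

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmParams holds just the values that vary between clusters in the
	// config above; an illustrative struct, not minikube's internal type.
	type kubeadmParams struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
		ServiceSubnet    string
		K8sVersion       string
	}

	const cfgTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := kubeadmParams{
			AdvertiseAddress: "192.168.39.162",
			BindPort:         8444,
			NodeName:         "default-k8s-diff-port-327790",
			PodSubnet:        "10.244.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
			K8sVersion:       "v1.32.0",
		}
		t := template.Must(template.New("kubeadm").Parse(cfgTmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}
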
	I1216 20:59:42.983631   60829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 20:59:42.996192   60829 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:59:42.996283   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:59:43.008389   60829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 20:59:43.027984   60829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:59:43.045672   60829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1216 20:59:43.063620   60829 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1216 20:59:43.067925   60829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:43.082946   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:43.220929   60829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:59:43.243843   60829 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790 for IP: 192.168.39.162
	I1216 20:59:43.243870   60829 certs.go:194] generating shared ca certs ...
	I1216 20:59:43.243888   60829 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:59:43.244125   60829 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:59:43.244185   60829 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:59:43.244200   60829 certs.go:256] generating profile certs ...
	I1216 20:59:43.244324   60829 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.key
	I1216 20:59:43.244400   60829 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.key.0f0bf709
	I1216 20:59:43.244449   60829 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.key
	I1216 20:59:43.244606   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:59:43.244649   60829 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:59:43.244666   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:59:43.244689   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:59:43.244711   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:59:43.244731   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:59:43.244776   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:43.245449   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:59:43.283598   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:59:43.309321   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:59:43.343071   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:59:43.379763   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1216 20:59:43.409794   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 20:59:43.437074   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:59:43.462616   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 20:59:43.487711   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:59:43.512636   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:59:43.539050   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:59:43.566507   60829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 20:59:43.584425   60829 ssh_runner.go:195] Run: openssl version
	I1216 20:59:43.590996   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:59:43.604384   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.609342   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.609404   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.615902   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:59:43.627432   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:59:43.638929   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.644189   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.644267   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.650550   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 20:59:43.662678   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:59:43.674981   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.680022   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.680113   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.686159   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 20:59:43.697897   60829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:59:43.702835   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 20:59:43.709262   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 20:59:43.716370   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 20:59:43.725031   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 20:59:43.732876   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 20:59:43.739810   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
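
Each "openssl x509 -checkend 86400" call above asks whether the given certificate will still be valid 24 hours from now; a non-zero exit means the cert expires inside that window and would have to be regenerated. Below is a minimal Go sketch of the equivalent check with crypto/x509, assuming the certificate path is readable on the local filesystem (in the real flow the files live on the VM and are checked over SSH).

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certExpiresWithin reports whether the PEM-encoded certificate at path
	// expires within the given window -- the same question answered by
	// "openssl x509 -checkend 86400" in the log above.
	func certExpiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", expiring)
	}
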
	I1216 20:59:43.746998   60829 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-327790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:59:43.747131   60829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:59:43.747189   60829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:43.791895   60829 cri.go:89] found id: ""
	I1216 20:59:43.791979   60829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 20:59:43.802858   60829 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 20:59:43.802886   60829 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 20:59:43.802943   60829 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 20:59:43.813313   60829 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 20:59:43.814296   60829 kubeconfig.go:125] found "default-k8s-diff-port-327790" server: "https://192.168.39.162:8444"
	I1216 20:59:43.816374   60829 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 20:59:43.825834   60829 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1216 20:59:43.825871   60829 kubeadm.go:1160] stopping kube-system containers ...
	I1216 20:59:43.825884   60829 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 20:59:43.825934   60829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:43.870890   60829 cri.go:89] found id: ""
	I1216 20:59:43.870965   60829 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 20:59:43.888155   60829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:59:43.898356   60829 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:59:43.898381   60829 kubeadm.go:157] found existing configuration files:
	
	I1216 20:59:43.898445   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1216 20:59:43.908232   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:59:43.908310   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:59:43.918637   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1216 20:59:43.928255   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:59:43.928343   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:59:43.938479   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1216 20:59:43.948085   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:59:43.948157   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:59:43.959080   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1216 20:59:43.969218   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:59:43.969275   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 20:59:43.980063   60829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 20:59:43.990768   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:44.125741   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:44.845177   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:45.049512   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:45.162055   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
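
The five kubeadm invocations above are the restart path: instead of a full "kubeadm init", minikube re-runs only the phases needed to bring an existing control plane back up (certs, kubeconfig, kubelet-start, control-plane, etcd). The Go sketch below replays that sequence with os/exec; it is illustrative only, since minikube actually drives these commands over SSH inside the VM rather than locally.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runInitPhases replays the kubeadm phases used for a control-plane
	// restart in the log above: certs, kubeconfig, kubelet-start,
	// control-plane, etcd.
	func runInitPhases(kubeadmCfg, binDir string) error {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, phase := range phases {
			args := append(phase, "--config", kubeadmCfg)
			cmd := exec.Command(binDir+"/kubeadm", args...)
			out, err := cmd.CombinedOutput()
			if err != nil {
				return fmt.Errorf("kubeadm %v failed: %v\n%s", phase, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := runInitPhases("/var/tmp/minikube/kubeadm.yaml", "/var/lib/minikube/binaries/v1.32.0"); err != nil {
			panic(err)
		}
	}
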
	I1216 20:59:45.284927   60829 api_server.go:52] waiting for apiserver process to appear ...
	I1216 20:59:45.285036   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:43.954985   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:43.955087   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:45.168144   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:45.168719   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:45.168750   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:45.168671   61973 retry.go:31] will retry after 1.835631107s: waiting for machine to come up
	I1216 20:59:47.005854   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:47.006380   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:47.006422   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:47.006339   61973 retry.go:31] will retry after 1.943800898s: waiting for machine to come up
	I1216 20:59:48.951552   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:48.952050   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:48.952114   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:48.952008   61973 retry.go:31] will retry after 2.949898251s: waiting for machine to come up
	I1216 20:59:45.785964   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:46.285989   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:46.339555   60829 api_server.go:72] duration metric: took 1.054628295s to wait for apiserver process to appear ...
	I1216 20:59:46.339597   60829 api_server.go:88] waiting for apiserver healthz status ...
	I1216 20:59:46.339636   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:46.340197   60829 api_server.go:269] stopped: https://192.168.39.162:8444/healthz: Get "https://192.168.39.162:8444/healthz": dial tcp 192.168.39.162:8444: connect: connection refused
	I1216 20:59:46.839771   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.461907   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 20:59:49.461943   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 20:59:49.461958   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.513069   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 20:59:49.513121   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 20:59:49.840517   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.846051   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 20:59:49.846086   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 20:59:50.339824   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:50.347663   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 20:59:50.347708   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 20:59:50.840385   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:50.844943   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1216 20:59:50.854518   60829 api_server.go:141] control plane version: v1.32.0
	I1216 20:59:50.854546   60829 api_server.go:131] duration metric: took 4.514941385s to wait for apiserver health ...
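
The exchange above is the usual healthz progression on a freshly restarted apiserver: connection refused while the process starts, 403 for the anonymous probe before RBAC bootstrap roles exist, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200. Below is a hedged Go sketch of a comparable polling loop (not minikube's api_server.go); it skips TLS verification because the probe is anonymous against a self-signed CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200 "ok" or the deadline passes, mirroring the retry pattern in the
	// log (connection refused -> 403 -> 500 -> 200).
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Anonymous probe against a self-signed cert, so skip verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			} else {
				fmt.Println("healthz not reachable yet:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.162:8444/healthz", 4*time.Minute); err != nil {
			panic(err)
		}
	}
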
	I1216 20:59:50.854554   60829 cni.go:84] Creating CNI manager for ""
	I1216 20:59:50.854560   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:50.856538   60829 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 20:59:48.956352   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:48.956414   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:51.905108   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:51.905560   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:51.905594   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:51.905505   61973 retry.go:31] will retry after 3.44069953s: waiting for machine to come up
	I1216 20:59:50.858169   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 20:59:50.882809   60829 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
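
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration minikube recommends for the kvm2 + crio combination, wired to the 10.244.0.0/16 pod CIDR chosen earlier. The log does not show its exact contents; the Go sketch below emits a typical bridge + host-local + portmap conflist with illustrative field values, not the literal file from this run.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Emit a representative bridge CNI conflist of the kind written to
	// /etc/cni/net.d/1-k8s.conflist; values are illustrative.
	func main() {
		conflist := map[string]interface{}{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]interface{}{
				{
					"type":             "bridge",
					"bridge":           "bridge",
					"addIf":            "true",
					"isDefaultGateway": true,
					"ipMasq":           true,
					"hairpinMode":      true,
					"ipam": map[string]interface{}{
						"type":   "host-local",
						"subnet": "10.244.0.0/16",
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		out, _ := json.MarshalIndent(conflist, "", "  ")
		fmt.Println(string(out))
	}
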
	I1216 20:59:50.912787   60829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 20:59:50.933650   60829 system_pods.go:59] 8 kube-system pods found
	I1216 20:59:50.933693   60829 system_pods.go:61] "coredns-668d6bf9bc-tqh9s" [56b4db37-b6bc-49eb-b45f-b8b4d1f16eed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 20:59:50.933705   60829 system_pods.go:61] "etcd-default-k8s-diff-port-327790" [067f7c41-3763-42d3-af06-ad50fad3d206] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 20:59:50.933713   60829 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-327790" [f1964b5b-9d2b-4f82-afc6-2f359c9b8827] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 20:59:50.933722   60829 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-327790" [fd7479e3-be26-4bb0-b53a-e40766a33996] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 20:59:50.933742   60829 system_pods.go:61] "kube-proxy-mplxr" [027abdc5-7022-4528-a93f-36f3b10115ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 20:59:50.933751   60829 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-327790" [d7416a53-ccb4-46fd-9992-46cbf7ec0a3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 20:59:50.933763   60829 system_pods.go:61] "metrics-server-f79f97bbb-hlt7s" [d42906e3-387c-493e-9d06-5bb654dc9784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 20:59:50.933772   60829 system_pods.go:61] "storage-provisioner" [c774635a-faca-4a1a-8f4e-2161447ebaa1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 20:59:50.933785   60829 system_pods.go:74] duration metric: took 20.968988ms to wait for pod list to return data ...
	I1216 20:59:50.933804   60829 node_conditions.go:102] verifying NodePressure condition ...
	I1216 20:59:50.937958   60829 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 20:59:50.937986   60829 node_conditions.go:123] node cpu capacity is 2
	I1216 20:59:50.938008   60829 node_conditions.go:105] duration metric: took 4.196302ms to run NodePressure ...
	I1216 20:59:50.938030   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:51.231412   60829 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 20:59:51.236005   60829 kubeadm.go:739] kubelet initialised
	I1216 20:59:51.236029   60829 kubeadm.go:740] duration metric: took 4.585977ms waiting for restarted kubelet to initialise ...
	I1216 20:59:51.236042   60829 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 20:59:51.243608   60829 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace to be "Ready" ...
	I1216 20:59:53.250907   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
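
pod_ready.go is polling the coredns pod's Ready condition here and keeps logging "Ready":"False" until the restarted kubelet reports the container healthy. A comparable check written against client-go is sketched below, assuming a kubeconfig for the cluster at a hypothetical path (/home/jenkins/.kube/config).

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls a pod until its Ready condition is True, the same
	// check pod_ready.go performs for coredns-668d6bf9bc-tqh9s above.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		// Hypothetical kubeconfig path; substitute the profile's real kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(cs, "kube-system", "coredns-668d6bf9bc-tqh9s", 4*time.Minute); err != nil {
			panic(err)
		}
	}
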
	I1216 20:59:56.696377   60215 start.go:364] duration metric: took 54.44579772s to acquireMachinesLock for "embed-certs-606219"
	I1216 20:59:56.696450   60215 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:56.696470   60215 fix.go:54] fixHost starting: 
	I1216 20:59:56.696862   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:56.696902   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:56.714627   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42069
	I1216 20:59:56.715074   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:56.715599   60215 main.go:141] libmachine: Using API Version  1
	I1216 20:59:56.715629   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:56.715953   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:56.716116   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 20:59:56.716252   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 20:59:56.717876   60215 fix.go:112] recreateIfNeeded on embed-certs-606219: state=Stopped err=<nil>
	I1216 20:59:56.717902   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	W1216 20:59:56.718088   60215 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:56.720072   60215 out.go:177] * Restarting existing kvm2 VM for "embed-certs-606219" ...
	I1216 20:59:53.957328   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:53.957395   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:55.349557   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.350105   60933 main.go:141] libmachine: (old-k8s-version-847766) Found IP for machine: 192.168.72.240
	I1216 20:59:55.350129   60933 main.go:141] libmachine: (old-k8s-version-847766) Reserving static IP address...
	I1216 20:59:55.350140   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has current primary IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.350574   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "old-k8s-version-847766", mac: "52:54:00:c4:f2:8a", ip: "192.168.72.240"} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.350608   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | skip adding static IP to network mk-old-k8s-version-847766 - found existing host DHCP lease matching {name: "old-k8s-version-847766", mac: "52:54:00:c4:f2:8a", ip: "192.168.72.240"}
	I1216 20:59:55.350623   60933 main.go:141] libmachine: (old-k8s-version-847766) Reserved static IP address: 192.168.72.240
	I1216 20:59:55.350646   60933 main.go:141] libmachine: (old-k8s-version-847766) Waiting for SSH to be available...
	I1216 20:59:55.350662   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Getting to WaitForSSH function...
	I1216 20:59:55.353011   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.353346   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.353369   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.353535   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Using SSH client type: external
	I1216 20:59:55.353560   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa (-rw-------)
	I1216 20:59:55.353592   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:55.353606   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | About to run SSH command:
	I1216 20:59:55.353621   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | exit 0
	I1216 20:59:55.480726   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | SSH cmd err, output: <nil>: 
	I1216 20:59:55.481062   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetConfigRaw
	I1216 20:59:55.481692   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:55.484113   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.484500   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.484537   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.484769   60933 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/config.json ...
	I1216 20:59:55.484985   60933 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:55.485008   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:55.485220   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.487511   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.487835   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.487862   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.487958   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.488134   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.488268   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.488405   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.488546   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.488725   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.488735   60933 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:55.596092   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:55.596127   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.596401   60933 buildroot.go:166] provisioning hostname "old-k8s-version-847766"
	I1216 20:59:55.596426   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.596644   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.599286   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.599631   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.599662   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.599814   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.600010   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.600166   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.600299   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.600462   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.600665   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.600678   60933 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-847766 && echo "old-k8s-version-847766" | sudo tee /etc/hostname
	I1216 20:59:55.731851   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-847766
	
	I1216 20:59:55.731879   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.734802   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.735155   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.735186   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.735422   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.735650   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.735815   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.736030   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.736194   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.736377   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.736393   60933 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-847766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-847766/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-847766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:55.857050   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:55.857108   60933 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:55.857138   60933 buildroot.go:174] setting up certificates
	I1216 20:59:55.857163   60933 provision.go:84] configureAuth start
	I1216 20:59:55.857180   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.857505   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:55.860286   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.860613   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.860643   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.860826   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.863292   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.863682   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.863709   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.863871   60933 provision.go:143] copyHostCerts
	I1216 20:59:55.863920   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:55.863929   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:55.863986   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:55.864069   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:55.864077   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:55.864104   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:55.864159   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:55.864177   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:55.864202   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:55.864250   60933 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-847766 san=[127.0.0.1 192.168.72.240 localhost minikube old-k8s-version-847766]
	I1216 20:59:56.058548   60933 provision.go:177] copyRemoteCerts
	I1216 20:59:56.058603   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:56.058638   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.061354   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.061666   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.061707   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.061838   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.062039   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.062200   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.062356   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.146788   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:56.172789   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1216 20:59:56.198040   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 20:59:56.222476   60933 provision.go:87] duration metric: took 365.299433ms to configureAuth
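
configureAuth above regenerated the docker-machine style server certificate for the VM, signing it with the minikube CA and embedding the SANs listed in the provision.go:117 line (127.0.0.1, 192.168.72.240, localhost, minikube, old-k8s-version-847766), then copied ca.pem, server.pem and server-key.pem into /etc/docker. The Go sketch below condenses that signing step with crypto/x509; PEM encoding and file writes are omitted, and the helper names are illustrative rather than minikube's own.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// signServerCert issues a server certificate signed by the given CA,
	// embedding the SANs seen in the log above.
	func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-847766"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.240")},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-847766"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}

	func main() {
		// Self-signed CA stands in for the existing minikube CA key pair.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		ca, _ := x509.ParseCertificate(caDER)
		der, _, err := signServerCert(ca, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("issued server cert, %d DER bytes\n", len(der))
	}
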
	I1216 20:59:56.222505   60933 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:56.222706   60933 config.go:182] Loaded profile config "old-k8s-version-847766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:59:56.222790   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.225376   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.225752   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.225779   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.225965   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.226182   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.226363   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.226516   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.226687   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:56.226887   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:56.226906   60933 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:56.451434   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:56.451464   60933 machine.go:96] duration metric: took 966.463181ms to provisionDockerMachine
	I1216 20:59:56.451478   60933 start.go:293] postStartSetup for "old-k8s-version-847766" (driver="kvm2")
	I1216 20:59:56.451513   60933 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:56.451541   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.451926   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:56.451980   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.454840   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.455302   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.455331   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.455454   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.455661   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.455814   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.455988   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.542904   60933 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:56.547362   60933 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:56.547389   60933 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:56.547467   60933 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:56.547568   60933 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:56.547677   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:56.557902   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:56.582796   60933 start.go:296] duration metric: took 131.303406ms for postStartSetup
	I1216 20:59:56.582846   60933 fix.go:56] duration metric: took 19.442354832s for fixHost
	I1216 20:59:56.582872   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.585478   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.585803   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.585831   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.586011   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.586194   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.586358   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.586472   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.586640   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:56.586809   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:56.586819   60933 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:56.696254   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382796.650794736
	
	I1216 20:59:56.696274   60933 fix.go:216] guest clock: 1734382796.650794736
	I1216 20:59:56.696281   60933 fix.go:229] Guest: 2024-12-16 20:59:56.650794736 +0000 UTC Remote: 2024-12-16 20:59:56.582851742 +0000 UTC m=+262.230512454 (delta=67.942994ms)
	I1216 20:59:56.696299   60933 fix.go:200] guest clock delta is within tolerance: 67.942994ms
	I1216 20:59:56.696304   60933 start.go:83] releasing machines lock for "old-k8s-version-847766", held for 19.555844424s
	I1216 20:59:56.696333   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.696608   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:56.699486   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.699932   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.699964   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.700068   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700645   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700846   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700948   60933 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:56.701007   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.701115   60933 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:56.701140   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.703937   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704117   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704314   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.704342   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704496   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.704567   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.704601   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704680   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.704762   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.704836   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.704982   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.704987   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.705134   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.705259   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.784295   60933 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:56.817481   60933 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:56.968124   60933 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:56.979827   60933 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:56.979892   60933 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:56.997867   60933 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:56.997891   60933 start.go:495] detecting cgroup driver to use...
	I1216 20:59:56.997954   60933 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:57.016064   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:57.031596   60933 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:57.031665   60933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:57.047562   60933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:57.062737   60933 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:57.183918   60933 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:57.354699   60933 docker.go:233] disabling docker service ...
	I1216 20:59:57.354794   60933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:57.373311   60933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:57.390014   60933 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:57.523623   60933 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:57.656261   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:57.671374   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:57.692647   60933 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1216 20:59:57.692709   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.704496   60933 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:57.704548   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.715848   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.727022   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.738899   60933 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
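The container-runtime setup above reduces to a short sequence of shell commands run over SSH inside the guest. A minimal sketch of that same sequence, with every path and value taken from the log lines above (the restart itself appears a few lines further down):

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and switch CRI-O to the cgroupfs cgroup manager
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # drop any stale minikube-generated CNI config, then restart the runtime
    sudo rm -rf /etc/cni/net.mk
    sudo systemctl restart crio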
	I1216 20:59:57.756457   60933 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:57.773236   60933 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:57.773289   60933 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:57.789209   60933 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
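The exit status 255 from sysctl just above is expected on a freshly booted guest: the net.bridge.* keys only exist once the br_netfilter module is loaded, which is exactly what the follow-up commands take care of. A manual reproduction of that check-then-fix sequence (values as in the log; nothing else assumed):

    sudo sysctl net.bridge.bridge-nf-call-iptables   # fails while br_netfilter is not loaded
    sudo modprobe br_netfilter                       # creates the /proc/sys/net/bridge/ keys
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"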
	I1216 20:59:57.800881   60933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:57.927794   60933 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:58.038173   60933 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:58.038256   60933 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:58.044633   60933 start.go:563] Will wait 60s for crictl version
	I1216 20:59:58.044705   60933 ssh_runner.go:195] Run: which crictl
	I1216 20:59:58.048781   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:58.088449   60933 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:58.088579   60933 ssh_runner.go:195] Run: crio --version
	I1216 20:59:58.119211   60933 ssh_runner.go:195] Run: crio --version
	I1216 20:59:58.151411   60933 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1216 20:59:58.152582   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:58.155196   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:58.155558   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:58.155587   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:58.155763   60933 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:58.160369   60933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:58.174013   60933 kubeadm.go:883] updating cluster {Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:58.174155   60933 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:59:58.174212   60933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:58.226674   60933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 20:59:58.226747   60933 ssh_runner.go:195] Run: which lz4
	I1216 20:59:58.231330   60933 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 20:59:58.236178   60933 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 20:59:58.236214   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1216 20:59:56.721746   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Start
	I1216 20:59:56.721946   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring networks are active...
	I1216 20:59:56.722810   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring network default is active
	I1216 20:59:56.723209   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring network mk-embed-certs-606219 is active
	I1216 20:59:56.723644   60215 main.go:141] libmachine: (embed-certs-606219) Getting domain xml...
	I1216 20:59:56.724387   60215 main.go:141] libmachine: (embed-certs-606219) Creating domain...
	I1216 20:59:58.005906   60215 main.go:141] libmachine: (embed-certs-606219) Waiting to get IP...
	I1216 20:59:58.006646   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.007021   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.007136   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.007017   62108 retry.go:31] will retry after 280.124694ms: waiting for machine to come up
	I1216 20:59:58.288552   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.289049   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.289078   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.289013   62108 retry.go:31] will retry after 299.873899ms: waiting for machine to come up
	I1216 20:59:58.590757   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.591593   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.591625   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.591487   62108 retry.go:31] will retry after 486.884982ms: waiting for machine to come up
	I1216 20:59:59.079996   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:59.080618   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:59.080649   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:59.080581   62108 retry.go:31] will retry after 608.856993ms: waiting for machine to come up
	I1216 20:59:59.691549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:59.692107   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:59.692139   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:59.692064   62108 retry.go:31] will retry after 730.774006ms: waiting for machine to come up
	I1216 20:59:55.752607   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 20:59:58.251902   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:00.254126   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 20:59:58.958114   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:58.958161   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.567722   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": read tcp 192.168.50.1:38738->192.168.50.240:8443: read: connection reset by peer
	I1216 20:59:59.567773   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.568271   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 20:59:59.954745   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.955447   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 21:00:00.455116   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:00.456036   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 21:00:00.954418   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:00.100507   60933 crio.go:462] duration metric: took 1.869217257s to copy over tarball
	I1216 21:00:00.100619   60933 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 21:00:03.581430   60933 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.480755636s)
	I1216 21:00:03.581469   60933 crio.go:469] duration metric: took 3.480924144s to extract the tarball
	I1216 21:00:03.581478   60933 ssh_runner.go:146] rm: /preloaded.tar.lz4
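The preload path recorded here is a plain copy-and-extract of a prebuilt image tarball into CRI-O's storage. Once the tarball has been copied to the guest (the scp line above), the guest-side steps are exactly the commands logged:

    # detect whether the tarball is already present
    stat -c "%s %y" /preloaded.tar.lz4
    # unpack into /var so the images land in the container storage, preserving file capabilities
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    # remove the tarball once extracted
    sudo rm /preloaded.tar.lz4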
	I1216 21:00:03.627932   60933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:03.667985   60933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 21:00:03.668013   60933 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 21:00:03.668078   60933 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:03.668110   60933 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.668207   60933 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.668262   60933 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.668262   60933 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:03.668332   60933 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1216 21:00:03.668215   60933 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.668092   60933 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.670096   60933 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:03.670294   60933 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.670305   60933 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.670305   60933 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.670333   60933 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.670394   60933 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1216 21:00:03.670396   60933 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:03.670467   60933 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.861573   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.869704   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.885911   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.904748   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1216 21:00:03.905328   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.906138   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.936548   60933 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1216 21:00:03.936658   60933 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.936736   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.019039   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.033811   60933 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1216 21:00:04.033863   60933 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.033927   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.082946   60933 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1216 21:00:04.082995   60933 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.083008   60933 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1216 21:00:04.083050   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083055   60933 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1216 21:00:04.083063   60933 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1216 21:00:04.083073   60933 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.083133   60933 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1216 21:00:04.083203   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.083205   60933 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.083306   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083145   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083139   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.123434   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.123702   60933 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1216 21:00:04.123740   60933 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.123786   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.150878   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:04.155586   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.155774   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.155877   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.155968   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.156205   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.226110   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.226429   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
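Because the v1.20.0 preload did not contain the expected images, the cache loader falls back to checking each image's ID in the runtime and removing any mismatch so the locally cached copy can be transferred instead. The per-image check-and-remove pair, as it appears in the log (coredns shown; the other images follow the same pattern):

    # compare the image ID in CRI-O's storage against the expected hash
    sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
    # on mismatch, remove it so the cached copy can be loaded in its place
    sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0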
	I1216 21:00:00.424272   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:00.424766   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:00.424795   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:00.424712   62108 retry.go:31] will retry after 947.177724ms: waiting for machine to come up
	I1216 21:00:01.373798   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:01.374448   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:01.374486   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:01.374376   62108 retry.go:31] will retry after 755.735247ms: waiting for machine to come up
	I1216 21:00:02.132092   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:02.132690   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:02.132716   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:02.132636   62108 retry.go:31] will retry after 1.25933291s: waiting for machine to come up
	I1216 21:00:03.393390   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:03.393951   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:03.393987   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:03.393887   62108 retry.go:31] will retry after 1.654271195s: waiting for machine to come up
	I1216 21:00:00.768561   60829 pod_ready.go:93] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:00.768603   60829 pod_ready.go:82] duration metric: took 9.524968022s for pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:00.768619   60829 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:02.778467   60829 pod_ready.go:93] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:02.778507   60829 pod_ready.go:82] duration metric: took 2.009878604s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:02.778523   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:03.290454   60829 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:03.290490   60829 pod_ready.go:82] duration metric: took 511.956426ms for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:03.290505   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:04.533609   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:04.533639   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:04.533655   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:04.679801   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:04.679836   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:04.955306   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.723827   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.723870   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:05.723892   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.750638   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.750674   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:05.955092   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.983280   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.983332   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:06.454742   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:06.467886   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:06.467924   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:06.954428   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:06.960039   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 200:
	ok
	I1216 21:00:06.969187   60421 api_server.go:141] control plane version: v1.32.0
	I1216 21:00:06.969231   60421 api_server.go:131] duration metric: took 28.515011952s to wait for apiserver health ...
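The health wait above is simply a repeated unauthenticated GET against the apiserver's /healthz endpoint until it returns 200 "ok"; the intermediate 403 and 500 bodies earlier in the log are the normal progression while RBAC bootstrap roles and post-start hooks finish. A rough manual equivalent (IP and port taken from the log; -k skips certificate verification, and the second URL assumes the apiserver's per-check health sub-paths, so treat it as illustrative):

    curl -sk https://192.168.50.240:8443/healthz
    # individual checks can usually be queried the same way, e.g. the hook that was still failing above
    curl -sk https://192.168.50.240:8443/healthz/poststarthook/rbac/bootstrap-roles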
	I1216 21:00:06.969242   60421 cni.go:84] Creating CNI manager for ""
	I1216 21:00:06.969249   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:06.971475   60421 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:00:06.973035   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:00:06.992348   60421 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:00:07.020819   60421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:00:07.035254   60421 system_pods.go:59] 8 kube-system pods found
	I1216 21:00:07.035308   60421 system_pods.go:61] "coredns-668d6bf9bc-snhjf" [c0cf42c8-521a-4d02-9d43-ff7a700b0eca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:00:07.035321   60421 system_pods.go:61] "etcd-no-preload-232338" [01ca2051-5953-44fd-bfff-40aa16ec7aca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 21:00:07.035335   60421 system_pods.go:61] "kube-apiserver-no-preload-232338" [f1fbbb3b-a0e5-4200-89ef-67085e51a31d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 21:00:07.035359   60421 system_pods.go:61] "kube-controller-manager-no-preload-232338" [200039ad-1a2c-4dc4-8307-d8c882d69f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 21:00:07.035373   60421 system_pods.go:61] "kube-proxy-5mw2b" [8fbddf14-8697-451a-a3c7-873fdd437247] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 21:00:07.035382   60421 system_pods.go:61] "kube-scheduler-no-preload-232338" [1b9a7a43-59fc-44ba-9863-04fb90e6554f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 21:00:07.035396   60421 system_pods.go:61] "metrics-server-f79f97bbb-5xf67" [447144e5-11d8-48f7-b2fd-7ab9fb3c04de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:00:07.035409   60421 system_pods.go:61] "storage-provisioner" [fb293bd2-f5be-4086-b821-ffd7df58dd5e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 21:00:07.035420   60421 system_pods.go:74] duration metric: took 14.571089ms to wait for pod list to return data ...
	I1216 21:00:07.035431   60421 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:00:07.044467   60421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:00:07.044592   60421 node_conditions.go:123] node cpu capacity is 2
	I1216 21:00:07.044633   60421 node_conditions.go:105] duration metric: took 9.191874ms to run NodePressure ...
	I1216 21:00:07.044668   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.388388   60421 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 21:00:07.394851   60421 kubeadm.go:739] kubelet initialised
	I1216 21:00:07.394881   60421 kubeadm.go:740] duration metric: took 6.459945ms waiting for restarted kubelet to initialise ...
	I1216 21:00:07.394891   60421 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:00:07.401877   60421 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.410697   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.410732   60421 pod_ready.go:82] duration metric: took 8.80876ms for pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.410744   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.410755   60421 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.418118   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "etcd-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.418149   60421 pod_ready.go:82] duration metric: took 7.383445ms for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.418163   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "etcd-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.418172   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.427341   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "kube-apiserver-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.427414   60421 pod_ready.go:82] duration metric: took 9.234588ms for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.427424   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "kube-apiserver-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.427432   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.435329   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.435378   60421 pod_ready.go:82] duration metric: took 7.931923ms for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.435392   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.435408   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5mw2b" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:04.457220   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.457399   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.457507   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.457596   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.457687   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.613834   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.613870   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.613923   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.613931   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.613960   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.613972   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.619915   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1216 21:00:04.791265   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1216 21:00:04.791297   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.791315   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1216 21:00:04.791352   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1216 21:00:04.791366   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1216 21:00:04.791384   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1216 21:00:04.836463   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1216 21:00:04.836536   60933 cache_images.go:92] duration metric: took 1.168508622s to LoadCachedImages
	W1216 21:00:04.836633   60933 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1216 21:00:04.836649   60933 kubeadm.go:934] updating node { 192.168.72.240 8443 v1.20.0 crio true true} ...
	I1216 21:00:04.836781   60933 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-847766 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 21:00:04.836877   60933 ssh_runner.go:195] Run: crio config
	I1216 21:00:04.898330   60933 cni.go:84] Creating CNI manager for ""
	I1216 21:00:04.898357   60933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:04.898371   60933 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 21:00:04.898396   60933 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.240 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-847766 NodeName:old-k8s-version-847766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1216 21:00:04.898560   60933 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-847766"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 21:00:04.898643   60933 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1216 21:00:04.910946   60933 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 21:00:04.911045   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 21:00:04.923199   60933 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1216 21:00:04.942705   60933 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 21:00:04.976598   60933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1216 21:00:05.001967   60933 ssh_runner.go:195] Run: grep 192.168.72.240	control-plane.minikube.internal$ /etc/hosts
	I1216 21:00:05.006819   60933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:00:05.020604   60933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:05.143039   60933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:00:05.162507   60933 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766 for IP: 192.168.72.240
	I1216 21:00:05.162535   60933 certs.go:194] generating shared ca certs ...
	I1216 21:00:05.162554   60933 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:05.162749   60933 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 21:00:05.162792   60933 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 21:00:05.162803   60933 certs.go:256] generating profile certs ...
	I1216 21:00:05.162907   60933 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.key
	I1216 21:00:05.162976   60933 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key.6c8704df
	I1216 21:00:05.163011   60933 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key
	I1216 21:00:05.163148   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 21:00:05.163176   60933 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 21:00:05.163186   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 21:00:05.163210   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 21:00:05.163278   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 21:00:05.163315   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 21:00:05.163379   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:05.164216   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 21:00:05.222491   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 21:00:05.253517   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 21:00:05.294338   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 21:00:05.342850   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 21:00:05.388068   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 21:00:05.422591   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 21:00:05.471916   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 21:00:05.505836   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 21:00:05.539404   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 21:00:05.570819   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 21:00:05.602079   60933 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 21:00:05.630577   60933 ssh_runner.go:195] Run: openssl version
	I1216 21:00:05.640017   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 21:00:05.653759   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.659573   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.659645   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.666667   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 21:00:05.680061   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 21:00:05.692776   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.698644   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.698728   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.705913   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 21:00:05.730062   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 21:00:05.750034   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.757158   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.757252   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.765679   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 21:00:05.782537   60933 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 21:00:05.788291   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 21:00:05.797707   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 21:00:05.807016   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 21:00:05.818160   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 21:00:05.827428   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 21:00:05.836499   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 21:00:05.846104   60933 kubeadm.go:392] StartCluster: {Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:00:05.846244   60933 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 21:00:05.846331   60933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:05.901274   60933 cri.go:89] found id: ""
	I1216 21:00:05.901376   60933 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 21:00:05.917353   60933 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 21:00:05.917381   60933 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 21:00:05.917439   60933 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 21:00:05.928587   60933 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 21:00:05.932546   60933 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-847766" does not appear in /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:00:05.933844   60933 kubeconfig.go:62] /home/jenkins/minikube-integration/20091-7083/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-847766" cluster setting kubeconfig missing "old-k8s-version-847766" context setting]
	I1216 21:00:05.935400   60933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:05.938054   60933 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 21:00:05.950384   60933 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.240
	I1216 21:00:05.950433   60933 kubeadm.go:1160] stopping kube-system containers ...
	I1216 21:00:05.950450   60933 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 21:00:05.950515   60933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:05.999495   60933 cri.go:89] found id: ""
	I1216 21:00:05.999588   60933 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 21:00:06.024765   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:00:06.037807   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:00:06.037836   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:00:06.037894   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:00:06.048926   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:00:06.048997   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:00:06.060167   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:00:06.070841   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:00:06.070910   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:00:06.083517   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:00:06.099124   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:00:06.099214   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:00:06.110004   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:00:06.125600   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:00:06.125668   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:00:06.137212   60933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:00:06.148873   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:06.316611   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.220187   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.549730   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.698864   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.815495   60933 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:00:07.815657   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:08.316003   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:08.816482   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:09.315805   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:05.050699   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:05.051378   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:05.051413   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:05.051296   62108 retry.go:31] will retry after 2.184829789s: waiting for machine to come up
	I1216 21:00:07.237618   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:07.238137   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:07.238166   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:07.238049   62108 retry.go:31] will retry after 2.531717629s: waiting for machine to come up
	I1216 21:00:05.713060   60829 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:05.798544   60829 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.798569   60829 pod_ready.go:82] duration metric: took 2.508055323s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.798582   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mplxr" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.805322   60829 pod_ready.go:93] pod "kube-proxy-mplxr" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.805361   60829 pod_ready.go:82] duration metric: took 6.77ms for pod "kube-proxy-mplxr" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.805399   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.812700   60829 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.812727   60829 pod_ready.go:82] duration metric: took 7.281992ms for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.812741   60829 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.822004   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:10.321160   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:09.443582   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:11.443796   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:09.815863   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:10.316664   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:10.815852   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:11.316175   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:11.816446   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:12.316040   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:12.816172   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:13.316460   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:13.815700   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:14.316469   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:09.772318   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:09.772837   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:09.772869   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:09.772797   62108 retry.go:31] will retry after 2.557982234s: waiting for machine to come up
	I1216 21:00:12.331877   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:12.332340   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:12.332368   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:12.332298   62108 retry.go:31] will retry after 4.202991569s: waiting for machine to come up
	I1216 21:00:12.322897   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:14.323015   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:13.942154   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:16.442411   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:14.816539   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:15.315737   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:15.816465   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.316470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.816451   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:17.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:17.816470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:18.316165   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:18.816448   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:19.315972   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.539792   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.540299   60215 main.go:141] libmachine: (embed-certs-606219) Found IP for machine: 192.168.61.151
	I1216 21:00:16.540324   60215 main.go:141] libmachine: (embed-certs-606219) Reserving static IP address...
	I1216 21:00:16.540341   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has current primary IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.540771   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "embed-certs-606219", mac: "52:54:00:63:37:8f", ip: "192.168.61.151"} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.540810   60215 main.go:141] libmachine: (embed-certs-606219) DBG | skip adding static IP to network mk-embed-certs-606219 - found existing host DHCP lease matching {name: "embed-certs-606219", mac: "52:54:00:63:37:8f", ip: "192.168.61.151"}
	I1216 21:00:16.540827   60215 main.go:141] libmachine: (embed-certs-606219) Reserved static IP address: 192.168.61.151
	I1216 21:00:16.540839   60215 main.go:141] libmachine: (embed-certs-606219) Waiting for SSH to be available...
	I1216 21:00:16.540847   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Getting to WaitForSSH function...
	I1216 21:00:16.542958   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.543461   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.543503   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.543629   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Using SSH client type: external
	I1216 21:00:16.543663   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa (-rw-------)
	I1216 21:00:16.543696   60215 main.go:141] libmachine: (embed-certs-606219) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 21:00:16.543713   60215 main.go:141] libmachine: (embed-certs-606219) DBG | About to run SSH command:
	I1216 21:00:16.543732   60215 main.go:141] libmachine: (embed-certs-606219) DBG | exit 0
	I1216 21:00:16.671576   60215 main.go:141] libmachine: (embed-certs-606219) DBG | SSH cmd err, output: <nil>: 
	I1216 21:00:16.671965   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetConfigRaw
	I1216 21:00:16.672599   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:16.675179   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.675520   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.675549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.675726   60215 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/config.json ...
	I1216 21:00:16.675938   60215 machine.go:93] provisionDockerMachine start ...
	I1216 21:00:16.675955   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:16.676186   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.678481   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.678824   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.678846   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.679020   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.679203   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.679388   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.679530   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.679689   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.679883   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.679896   60215 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 21:00:16.791925   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 21:00:16.791959   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:16.792224   60215 buildroot.go:166] provisioning hostname "embed-certs-606219"
	I1216 21:00:16.792261   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:16.792492   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.794967   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.795359   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.795388   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.795496   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.795674   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.795845   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.795995   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.796238   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.796466   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.796486   60215 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-606219 && echo "embed-certs-606219" | sudo tee /etc/hostname
	I1216 21:00:16.923887   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-606219
	
	I1216 21:00:16.923922   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.926689   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.927228   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.927283   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.927500   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.927724   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.927943   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.928139   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.928396   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.928574   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.928590   60215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-606219' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-606219/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-606219' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 21:00:17.045462   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 21:00:17.045508   60215 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 21:00:17.045540   60215 buildroot.go:174] setting up certificates
	I1216 21:00:17.045560   60215 provision.go:84] configureAuth start
	I1216 21:00:17.045578   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:17.045889   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:17.048733   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.049038   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.049062   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.049216   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.051371   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.051713   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.051748   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.051861   60215 provision.go:143] copyHostCerts
	I1216 21:00:17.051940   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 21:00:17.051954   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 21:00:17.052033   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 21:00:17.052187   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 21:00:17.052203   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 21:00:17.052230   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 21:00:17.052306   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 21:00:17.052317   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 21:00:17.052342   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 21:00:17.052413   60215 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.embed-certs-606219 san=[127.0.0.1 192.168.61.151 embed-certs-606219 localhost minikube]
	I1216 21:00:17.345020   60215 provision.go:177] copyRemoteCerts
	I1216 21:00:17.345079   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 21:00:17.345116   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.348019   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.348323   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.348350   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.348554   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.348783   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.348931   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.349093   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:17.434520   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 21:00:17.462097   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1216 21:00:17.488071   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 21:00:17.516428   60215 provision.go:87] duration metric: took 470.851303ms to configureAuth
	I1216 21:00:17.516461   60215 buildroot.go:189] setting minikube options for container-runtime
	I1216 21:00:17.516673   60215 config.go:182] Loaded profile config "embed-certs-606219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:00:17.516763   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.519637   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.519981   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.520019   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.520229   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.520451   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.520654   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.520813   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.520977   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:17.521148   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:17.521166   60215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 21:00:17.787052   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 21:00:17.787084   60215 machine.go:96] duration metric: took 1.111132885s to provisionDockerMachine
	I1216 21:00:17.787111   60215 start.go:293] postStartSetup for "embed-certs-606219" (driver="kvm2")
	I1216 21:00:17.787126   60215 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 21:00:17.787145   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:17.787551   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 21:00:17.787588   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.790332   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.790710   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.790743   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.790891   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.791130   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.791336   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.791492   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:17.881548   60215 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 21:00:17.886692   60215 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 21:00:17.886720   60215 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 21:00:17.886788   60215 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 21:00:17.886886   60215 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 21:00:17.886983   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 21:00:17.897832   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:17.926273   60215 start.go:296] duration metric: took 139.147156ms for postStartSetup
	I1216 21:00:17.926316   60215 fix.go:56] duration metric: took 21.229856025s for fixHost
	I1216 21:00:17.926338   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.929204   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.929600   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.929623   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.929809   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.930036   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.930220   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.930411   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.930554   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:17.930723   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:17.930734   60215 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 21:00:18.040530   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382817.988837134
	
	I1216 21:00:18.040557   60215 fix.go:216] guest clock: 1734382817.988837134
	I1216 21:00:18.040590   60215 fix.go:229] Guest: 2024-12-16 21:00:17.988837134 +0000 UTC Remote: 2024-12-16 21:00:17.926320778 +0000 UTC m=+358.266755361 (delta=62.516356ms)
	I1216 21:00:18.040639   60215 fix.go:200] guest clock delta is within tolerance: 62.516356ms
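The fix.go lines above compare the guest clock (read with `date +%s.%N` over SSH) against the host-side timestamp and accept the 62.5ms skew. A minimal sketch of that comparison, with an assumed tolerance constant (minikube's actual threshold may differ):

// Illustrative only: computing the guest/host clock delta reported above.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(0, 1734382817988837134)                                // `date +%s.%N` on the guest
	host, _ := time.Parse(time.RFC3339Nano, "2024-12-16T21:00:17.926320778Z") // host-side reference time
	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed threshold, not minikube's constant
	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
}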
	I1216 21:00:18.040650   60215 start.go:83] releasing machines lock for "embed-certs-606219", held for 21.34422537s
	I1216 21:00:18.040682   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.040997   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:18.044100   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.044549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.044584   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.044727   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045237   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045454   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045544   60215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 21:00:18.045602   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:18.045673   60215 ssh_runner.go:195] Run: cat /version.json
	I1216 21:00:18.045702   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:18.048852   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049066   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049259   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.049285   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049423   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:18.049578   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.049610   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049611   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:18.049688   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:18.049885   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:18.049908   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:18.050090   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:18.050082   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:18.050313   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:18.128381   60215 ssh_runner.go:195] Run: systemctl --version
	I1216 21:00:18.165162   60215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 21:00:18.313679   60215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 21:00:18.321330   60215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 21:00:18.321407   60215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 21:00:18.340577   60215 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
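The find/mv command above renames any bridge or podman CNI configs so the runtime ignores them. A hypothetical Go equivalent of that rename pass (directory and suffix taken from the log; the helper itself is not minikube code):

// Sketch: disable bridge/podman CNI configs by renaming them with a
// .mk_disabled suffix, mirroring the find/mv invocation in the log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", src)
		}
	}
}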
	I1216 21:00:18.340601   60215 start.go:495] detecting cgroup driver to use...
	I1216 21:00:18.340672   60215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 21:00:18.357273   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 21:00:18.373169   60215 docker.go:217] disabling cri-docker service (if available) ...
	I1216 21:00:18.373231   60215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 21:00:18.387904   60215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 21:00:18.402499   60215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 21:00:18.528830   60215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 21:00:18.677746   60215 docker.go:233] disabling docker service ...
	I1216 21:00:18.677839   60215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 21:00:18.693059   60215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 21:00:18.707368   60215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 21:00:18.870936   60215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 21:00:19.011321   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 21:00:19.025645   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 21:00:19.045618   60215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 21:00:19.045695   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.056739   60215 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 21:00:19.056813   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.067975   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.078954   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.090165   60215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 21:00:19.101906   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.112949   60215 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.131186   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.142238   60215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 21:00:19.152768   60215 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 21:00:19.152830   60215 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 21:00:19.169166   60215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 21:00:19.188991   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:19.319083   60215 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 21:00:19.427266   60215 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 21:00:19.427377   60215 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 21:00:19.432716   60215 start.go:563] Will wait 60s for crictl version
	I1216 21:00:19.432793   60215 ssh_runner.go:195] Run: which crictl
	I1216 21:00:19.437514   60215 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 21:00:19.484613   60215 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 21:00:19.484726   60215 ssh_runner.go:195] Run: crio --version
	I1216 21:00:19.519451   60215 ssh_runner.go:195] Run: crio --version
	I1216 21:00:19.555298   60215 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 21:00:19.556696   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:19.559802   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:19.560178   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:19.560201   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:19.560467   60215 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1216 21:00:19.565180   60215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:00:19.579863   60215 kubeadm.go:883] updating cluster {Name:embed-certs-606219 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 21:00:19.579991   60215 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 21:00:19.580037   60215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:19.618480   60215 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 21:00:19.618556   60215 ssh_runner.go:195] Run: which lz4
	I1216 21:00:19.622839   60215 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 21:00:19.627438   60215 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 21:00:19.627482   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1216 21:00:16.819610   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:19.326427   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:17.942107   60421 pod_ready.go:93] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:17.942148   60421 pod_ready.go:82] duration metric: took 10.506728599s for pod "kube-proxy-5mw2b" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.942161   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.948518   60421 pod_ready.go:93] pod "kube-scheduler-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:17.948540   60421 pod_ready.go:82] duration metric: took 6.372903ms for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.948549   60421 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:19.956992   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:21.957271   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:19.815807   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:20.316465   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:20.816461   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.316731   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.816637   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:22.315727   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:22.816447   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:23.316510   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:23.816408   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:24.316454   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.237863   60215 crio.go:462] duration metric: took 1.615059209s to copy over tarball
	I1216 21:00:21.237956   60215 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 21:00:23.572502   60215 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.33450798s)
	I1216 21:00:23.572535   60215 crio.go:469] duration metric: took 2.334633133s to extract the tarball
	I1216 21:00:23.572549   60215 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 21:00:23.613530   60215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:23.667777   60215 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 21:00:23.667807   60215 cache_images.go:84] Images are preloaded, skipping loading
	I1216 21:00:23.667815   60215 kubeadm.go:934] updating node { 192.168.61.151 8443 v1.32.0 crio true true} ...
	I1216 21:00:23.667929   60215 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-606219 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 21:00:23.668009   60215 ssh_runner.go:195] Run: crio config
	I1216 21:00:23.716162   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:00:23.716184   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:23.716192   60215 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 21:00:23.716211   60215 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.151 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-606219 NodeName:embed-certs-606219 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 21:00:23.716337   60215 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-606219"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.151"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.151"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
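The kubeadm/kubelet/kube-proxy configuration dumped above is generated from the cluster profile. Below is a minimal sketch, assuming a text/template approach, of how such a fragment could be rendered; the field names and values come from the log, but the template itself is hypothetical and far smaller than what minikube actually emits.

// Hypothetical sketch: render a kubeadm config fragment from profile values.
package main

import (
	"os"
	"text/template"
)

const frag = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	t.Execute(os.Stdout, struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}{"192.168.61.151", 8443, "embed-certs-606219"})
}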
	I1216 21:00:23.716393   60215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 21:00:23.727236   60215 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 21:00:23.727337   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 21:00:23.737632   60215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1216 21:00:23.757380   60215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 21:00:23.774863   60215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1216 21:00:23.795070   60215 ssh_runner.go:195] Run: grep 192.168.61.151	control-plane.minikube.internal$ /etc/hosts
	I1216 21:00:23.799453   60215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:00:23.814278   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:23.962200   60215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:00:23.981947   60215 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219 for IP: 192.168.61.151
	I1216 21:00:23.981976   60215 certs.go:194] generating shared ca certs ...
	I1216 21:00:23.981999   60215 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:23.982156   60215 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 21:00:23.982197   60215 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 21:00:23.982204   60215 certs.go:256] generating profile certs ...
	I1216 21:00:23.982280   60215 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/client.key
	I1216 21:00:23.982336   60215 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.key.b346be49
	I1216 21:00:23.982376   60215 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.key
	I1216 21:00:23.982483   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 21:00:23.982513   60215 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 21:00:23.982523   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 21:00:23.982555   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 21:00:23.982582   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 21:00:23.982602   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 21:00:23.982655   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:23.983524   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 21:00:24.015369   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 21:00:24.043889   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 21:00:24.087807   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 21:00:24.137438   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1216 21:00:24.174859   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 21:00:24.200220   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 21:00:24.225811   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 21:00:24.251567   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 21:00:24.276737   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 21:00:24.302541   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 21:00:24.329876   60215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 21:00:24.350133   60215 ssh_runner.go:195] Run: openssl version
	I1216 21:00:24.356984   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 21:00:24.371219   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.376759   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.376816   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.383725   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 21:00:24.397759   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 21:00:24.409836   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.414765   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.414836   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.421662   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 21:00:24.433843   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 21:00:24.447839   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.453107   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.453185   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.459472   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 21:00:24.471714   60215 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 21:00:24.476881   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 21:00:24.486263   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 21:00:24.493146   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 21:00:24.500093   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 21:00:24.506599   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 21:00:24.512946   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
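The series of `openssl x509 -noout -checkend 86400` runs above verifies that each control-plane certificate remains valid for at least another day. An illustrative Go equivalent of one such check using crypto/x509 (the path is taken from the log; the program is not part of minikube):

// Sketch: report whether a certificate expires within the next 24 hours,
// mirroring `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", time.Until(cert.NotAfter) < 24*time.Hour)
}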
	I1216 21:00:24.519699   60215 kubeadm.go:392] StartCluster: {Name:embed-certs-606219 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:00:24.519780   60215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 21:00:24.519861   60215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:24.570867   60215 cri.go:89] found id: ""
	I1216 21:00:24.570952   60215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 21:00:24.583857   60215 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 21:00:24.583887   60215 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 21:00:24.583943   60215 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 21:00:24.595709   60215 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 21:00:24.596734   60215 kubeconfig.go:125] found "embed-certs-606219" server: "https://192.168.61.151:8443"
	I1216 21:00:24.598569   60215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 21:00:24.609876   60215 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.151
	I1216 21:00:24.609905   60215 kubeadm.go:1160] stopping kube-system containers ...
	I1216 21:00:24.609917   60215 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 21:00:24.609964   60215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:24.654487   60215 cri.go:89] found id: ""
	I1216 21:00:24.654567   60215 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 21:00:24.676658   60215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:00:24.689546   60215 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:00:24.689571   60215 kubeadm.go:157] found existing configuration files:
	
	I1216 21:00:24.689615   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:00:21.819876   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:23.820061   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:23.957368   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:26.556301   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:24.816467   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:25.315789   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:25.816410   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.316537   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.816144   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.316659   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.816126   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:28.316568   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:28.816151   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:29.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:24.700928   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:00:24.701012   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:00:24.713438   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:00:24.725184   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:00:24.725257   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:00:24.737483   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:00:24.749488   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:00:24.749546   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:00:24.762322   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:00:24.774309   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:00:24.774391   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:00:24.787008   60215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:00:24.798394   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:25.009799   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:25.917432   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:26.175602   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:26.279646   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:26.362472   60215 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:00:26.362564   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.862646   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.362663   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.421335   60215 api_server.go:72] duration metric: took 1.058863872s to wait for apiserver process to appear ...
	I1216 21:00:27.421361   60215 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:00:27.421380   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:27.421869   60215 api_server.go:269] stopped: https://192.168.61.151:8443/healthz: Get "https://192.168.61.151:8443/healthz": dial tcp 192.168.61.151:8443: connect: connection refused
	I1216 21:00:27.921493   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:26.471175   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:28.819200   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:30.365380   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.365410   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.365425   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.416044   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.416078   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.422219   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.432135   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.432161   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.921790   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.929160   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:30.929192   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:31.421708   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:31.432805   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:31.432839   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:31.922000   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:31.933658   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 200:
	ok
	I1216 21:00:31.945496   60215 api_server.go:141] control plane version: v1.32.0
	I1216 21:00:31.945534   60215 api_server.go:131] duration metric: took 4.524165612s to wait for apiserver health ...
	I1216 21:00:31.945546   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:00:31.945555   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:31.947456   60215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
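The repeated 500 responses above, each listing post-start hooks with one or two "[-] ... failed: reason withheld" entries, are the normal kube-apiserver warm-up signature; the endpoint flips to 200 "ok" once the remaining hooks complete. As a rough sketch only (not minikube's actual api_server.go, and with TLS verification skipped purely for illustration), a comparable health-wait loop in Go looks like this:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 ("ok") or the timeout expires. A 500 whose body lists "[-]"
// post-start hooks just means initialization is still in progress, so retry.
func waitForHealthz(base string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver here serves a self-signed certificate; a real client
		// would load the cluster CA instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(base + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is simply "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the same cadence as the log above
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	// Endpoint taken from the log above; adjust for your own cluster.
	if err := waitForHealthz("https://192.168.61.151:8443", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}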
	I1216 21:00:28.954572   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:30.955397   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:29.816510   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:30.315756   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:30.815774   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.316516   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.816503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:32.316499   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:32.816455   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:33.316478   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:33.816363   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:34.316057   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.948727   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:00:31.977877   60215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
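The two steps above create /etc/cni/net.d and copy a 496-byte conflist into it. The log does not show that file's contents; purely as a representative example of the bridge + host-local layout such a conflist typically has (not the verbatim file minikube writes), one could generate a file of that shape like this:

package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI conflist (illustrative only; the real
// /etc/cni/net.d/1-k8s.conflist written in the log above differs in detail).
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{"dst": "0.0.0.0/0"}]
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println("write conflist:", err)
	}
}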
	I1216 21:00:32.014745   60215 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:00:32.027268   60215 system_pods.go:59] 8 kube-system pods found
	I1216 21:00:32.027303   60215 system_pods.go:61] "coredns-668d6bf9bc-rp29f" [0135dcef-2324-49ec-b459-f34b73efd82b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:00:32.027311   60215 system_pods.go:61] "etcd-embed-certs-606219" [05f01ef3-5d92-4d16-9643-0f56df3869f6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 21:00:32.027320   60215 system_pods.go:61] "kube-apiserver-embed-certs-606219" [4294c469-e47a-4722-a620-92c33d23b41e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 21:00:32.027326   60215 system_pods.go:61] "kube-controller-manager-embed-certs-606219" [cc8452e6-ca00-44dd-8d77-897df20d37f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 21:00:32.027354   60215 system_pods.go:61] "kube-proxy-8t495" [492be5cc-7d3a-4983-9bc7-14091bef7b43] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 21:00:32.027362   60215 system_pods.go:61] "kube-scheduler-embed-certs-606219" [63c42d73-a17a-4b37-a585-f7db5923c493] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 21:00:32.027376   60215 system_pods.go:61] "metrics-server-f79f97bbb-d6gmd" [50916d48-ee33-4e96-9507-c486d8ac7f7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:00:32.027387   60215 system_pods.go:61] "storage-provisioner" [1164651f-c3b5-445f-882a-60eb2f2fb3f8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 21:00:32.027399   60215 system_pods.go:74] duration metric: took 12.633182ms to wait for pod list to return data ...
	I1216 21:00:32.027409   60215 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:00:32.041648   60215 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:00:32.041677   60215 node_conditions.go:123] node cpu capacity is 2
	I1216 21:00:32.041686   60215 node_conditions.go:105] duration metric: took 14.27317ms to run NodePressure ...
	I1216 21:00:32.041704   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:32.492472   60215 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 21:00:32.504237   60215 kubeadm.go:739] kubelet initialised
	I1216 21:00:32.504271   60215 kubeadm.go:740] duration metric: took 11.772175ms waiting for restarted kubelet to initialise ...
	I1216 21:00:32.504282   60215 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:00:32.525531   60215 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:34.531954   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:31.321998   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:33.325288   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:32.959143   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:35.454928   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:37.455474   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:34.815839   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:35.316503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:35.816590   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.316231   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.816011   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:37.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:37.816494   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:38.316486   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:38.816475   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:39.315762   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.534516   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.032255   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:35.819575   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:38.322139   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:40.322804   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.456089   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:41.955128   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.816009   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:40.316444   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:40.816493   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.315869   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.816495   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:42.316034   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:42.816422   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:43.316432   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:43.815875   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:44.316036   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.032545   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:43.534471   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:42.819610   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:44.820561   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:43.955190   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:46.455540   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:44.816293   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.316458   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.815992   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:46.316054   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:46.816449   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:47.316113   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:47.816514   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:48.316353   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:48.816144   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:49.316435   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.031682   60215 pod_ready.go:93] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.031705   60215 pod_ready.go:82] duration metric: took 12.506146086s for pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.031715   60215 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.038109   60215 pod_ready.go:93] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.038138   60215 pod_ready.go:82] duration metric: took 6.416609ms for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.038149   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.043764   60215 pod_ready.go:93] pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.043784   60215 pod_ready.go:82] duration metric: took 5.621982ms for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.043793   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.053376   60215 pod_ready.go:93] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.053399   60215 pod_ready.go:82] duration metric: took 9.600142ms for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.053409   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8t495" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.058956   60215 pod_ready.go:93] pod "kube-proxy-8t495" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.058976   60215 pod_ready.go:82] duration metric: took 5.561188ms for pod "kube-proxy-8t495" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.058984   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.429908   60215 pod_ready.go:93] pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.429932   60215 pod_ready.go:82] duration metric: took 370.942192ms for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
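The pod_ready lines above poll each control-plane pod until its status carries a Ready=True condition. A minimal client-go sketch of that check follows (illustrative only; the kubeconfig path and pod name are taken from the log, but this is not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-606219", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}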
	I1216 21:00:45.429942   60215 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:47.438759   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:47.323605   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:49.819763   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:48.456270   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:50.955190   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:49.815935   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:50.316437   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:50.816335   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:51.315747   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:51.816504   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:52.315695   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:52.816115   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:53.316498   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:53.816529   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:54.315689   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:49.935961   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:51.937245   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:53.937302   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:51.820266   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:53.820748   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:52.956645   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:55.456064   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:54.816019   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:55.316484   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:55.816517   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.315858   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.816306   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:57.316447   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:57.815879   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:58.316493   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:58.816395   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:59.316225   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.437390   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:58.938617   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:56.323619   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:58.820330   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:57.956401   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:00.456844   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:02.457677   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:59.816440   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:00.315769   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:00.816285   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.316020   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.818175   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:02.315780   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:02.816411   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:03.315758   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:03.815810   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:04.316731   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.436856   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:03.436945   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:00.820484   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:03.323328   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:04.955714   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.455361   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:04.816470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:05.316528   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:05.815792   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:06.316491   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:06.815977   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:07.316002   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:07.816043   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:07.816114   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:07.861866   60933 cri.go:89] found id: ""
	I1216 21:01:07.861896   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.861906   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:07.861913   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:07.861978   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:07.905674   60933 cri.go:89] found id: ""
	I1216 21:01:07.905700   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.905707   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:07.905713   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:07.905798   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:07.949936   60933 cri.go:89] found id: ""
	I1216 21:01:07.949966   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.949977   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:07.949984   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:07.950048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:07.987196   60933 cri.go:89] found id: ""
	I1216 21:01:07.987223   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.987232   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:07.987237   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:07.987341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:08.033126   60933 cri.go:89] found id: ""
	I1216 21:01:08.033156   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.033168   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:08.033176   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:08.033252   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:08.072223   60933 cri.go:89] found id: ""
	I1216 21:01:08.072257   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.072270   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:08.072278   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:08.072345   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:08.117257   60933 cri.go:89] found id: ""
	I1216 21:01:08.117288   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.117299   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:08.117319   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:08.117389   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:08.158059   60933 cri.go:89] found id: ""
	I1216 21:01:08.158096   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.158106   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:08.158119   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:08.158133   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:08.232930   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:08.232966   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:08.277173   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:08.277204   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:08.331763   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:08.331802   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:08.346150   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:08.346178   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:08.488668   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
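When no apiserver process ever appears, the flow above falls back to enumerating candidate containers with crictl, treating empty output as "no container was found", and then collecting kubelet, dmesg, and CRI-O logs instead. A minimal sketch of that listing step, shelling out to the same crictl invocation shown in the log (assumes crictl and passwordless sudo on the host):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the (possibly empty) list of container IDs whose
// name matches the given component, mirroring the
// `sudo crictl ps -a --quiet --name=...` calls in the log above.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(component)
		if err != nil {
			fmt.Println(component, "error:", err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", component)
			continue
		}
		fmt.Println(component, "containers:", ids)
	}
}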
	I1216 21:01:05.437627   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.938294   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:05.820491   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.821058   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:10.322630   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:09.456101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:11.461923   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:10.989383   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:11.003162   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:11.003266   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:11.040432   60933 cri.go:89] found id: ""
	I1216 21:01:11.040464   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.040475   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:11.040483   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:11.040547   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:11.083083   60933 cri.go:89] found id: ""
	I1216 21:01:11.083110   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.083117   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:11.083122   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:11.083182   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:11.122842   60933 cri.go:89] found id: ""
	I1216 21:01:11.122880   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.122893   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:11.122900   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:11.122969   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:11.168227   60933 cri.go:89] found id: ""
	I1216 21:01:11.168268   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.168279   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:11.168286   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:11.168359   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:11.218660   60933 cri.go:89] found id: ""
	I1216 21:01:11.218689   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.218701   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:11.218708   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:11.218774   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:11.281179   60933 cri.go:89] found id: ""
	I1216 21:01:11.281214   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.281227   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:11.281236   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:11.281315   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:11.326419   60933 cri.go:89] found id: ""
	I1216 21:01:11.326453   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.326464   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:11.326472   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:11.326535   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:11.368825   60933 cri.go:89] found id: ""
	I1216 21:01:11.368863   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.368875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:11.368887   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:11.368905   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:11.454848   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:11.454869   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:11.454888   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:11.541685   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:11.541724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:11.581804   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:11.581830   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:11.635800   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:11.635838   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:14.152441   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:14.167637   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:14.167720   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:14.206685   60933 cri.go:89] found id: ""
	I1216 21:01:14.206716   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.206728   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:14.206735   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:14.206796   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:14.248126   60933 cri.go:89] found id: ""
	I1216 21:01:14.248151   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.248159   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:14.248165   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:14.248215   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:14.285030   60933 cri.go:89] found id: ""
	I1216 21:01:14.285067   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.285079   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:14.285086   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:14.285151   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:14.325706   60933 cri.go:89] found id: ""
	I1216 21:01:14.325736   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.325747   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:14.325755   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:14.325820   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:14.369447   60933 cri.go:89] found id: ""
	I1216 21:01:14.369475   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.369486   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:14.369494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:14.369557   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:10.437872   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:12.937013   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:12.820480   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:15.319910   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:13.959919   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:16.458101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:14.407792   60933 cri.go:89] found id: ""
	I1216 21:01:14.407818   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.407826   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:14.407832   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:14.407890   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:14.448380   60933 cri.go:89] found id: ""
	I1216 21:01:14.448411   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.448419   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:14.448424   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:14.448473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:14.487116   60933 cri.go:89] found id: ""
	I1216 21:01:14.487144   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.487154   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:14.487164   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:14.487177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:14.547342   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:14.547390   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:14.563385   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:14.563424   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:14.637363   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:14.637394   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:14.637410   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:14.715586   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:14.715626   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:17.258974   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:17.273896   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:17.273970   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:17.317359   60933 cri.go:89] found id: ""
	I1216 21:01:17.317394   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.317405   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:17.317412   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:17.317476   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:17.361422   60933 cri.go:89] found id: ""
	I1216 21:01:17.361451   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.361462   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:17.361469   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:17.361568   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:17.401466   60933 cri.go:89] found id: ""
	I1216 21:01:17.401522   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.401534   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:17.401544   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:17.401614   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:17.439560   60933 cri.go:89] found id: ""
	I1216 21:01:17.439588   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.439597   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:17.439603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:17.439655   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:17.480310   60933 cri.go:89] found id: ""
	I1216 21:01:17.480333   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.480340   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:17.480345   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:17.480393   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:17.528562   60933 cri.go:89] found id: ""
	I1216 21:01:17.528589   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.528600   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:17.528607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:17.528671   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:17.569863   60933 cri.go:89] found id: ""
	I1216 21:01:17.569900   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.569908   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:17.569914   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:17.569975   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:17.610840   60933 cri.go:89] found id: ""
	I1216 21:01:17.610867   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.610875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:17.610884   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:17.610895   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:17.661002   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:17.661041   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:17.675290   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:17.675318   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:17.743550   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:17.743572   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:17.743584   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:17.824479   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:17.824524   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:15.437260   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:17.937487   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:17.324337   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:19.819325   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:18.956605   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:20.957030   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:20.373687   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:20.389149   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:20.389244   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:20.429594   60933 cri.go:89] found id: ""
	I1216 21:01:20.429626   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.429634   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:20.429639   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:20.429693   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:20.473157   60933 cri.go:89] found id: ""
	I1216 21:01:20.473185   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.473193   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:20.473198   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:20.473264   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:20.512549   60933 cri.go:89] found id: ""
	I1216 21:01:20.512586   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.512597   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:20.512604   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:20.512676   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:20.549275   60933 cri.go:89] found id: ""
	I1216 21:01:20.549310   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.549323   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:20.549344   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:20.549408   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:20.587405   60933 cri.go:89] found id: ""
	I1216 21:01:20.587435   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.587443   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:20.587449   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:20.587515   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:20.625364   60933 cri.go:89] found id: ""
	I1216 21:01:20.625393   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.625400   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:20.625406   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:20.625456   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:20.664018   60933 cri.go:89] found id: ""
	I1216 21:01:20.664050   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.664061   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:20.664068   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:20.664117   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:20.703860   60933 cri.go:89] found id: ""
	I1216 21:01:20.703890   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.703898   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:20.703906   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:20.703918   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:20.754433   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:20.754470   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:20.770136   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:20.770172   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:20.854025   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:20.854049   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:20.854061   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:20.939628   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:20.939661   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:23.489645   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:23.503603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:23.503667   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:23.543044   60933 cri.go:89] found id: ""
	I1216 21:01:23.543070   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.543077   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:23.543083   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:23.543131   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:23.580333   60933 cri.go:89] found id: ""
	I1216 21:01:23.580362   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.580371   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:23.580377   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:23.580428   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:23.616732   60933 cri.go:89] found id: ""
	I1216 21:01:23.616766   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.616778   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:23.616785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:23.616834   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:23.655771   60933 cri.go:89] found id: ""
	I1216 21:01:23.655793   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.655801   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:23.655807   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:23.655861   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:23.694400   60933 cri.go:89] found id: ""
	I1216 21:01:23.694430   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.694437   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:23.694443   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:23.694500   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:23.732592   60933 cri.go:89] found id: ""
	I1216 21:01:23.732622   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.732630   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:23.732636   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:23.732688   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:23.769752   60933 cri.go:89] found id: ""
	I1216 21:01:23.769787   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.769801   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:23.769810   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:23.769892   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:23.806891   60933 cri.go:89] found id: ""
	I1216 21:01:23.806925   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.806936   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:23.806947   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:23.806963   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:23.822887   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:23.822912   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:23.898795   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:23.898817   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:23.898830   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:23.978036   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:23.978073   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:24.032500   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:24.032528   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:20.437888   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:22.936895   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:21.819859   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:23.820383   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:23.456331   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:25.960513   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:26.585937   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:26.599470   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:26.599543   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:26.635421   60933 cri.go:89] found id: ""
	I1216 21:01:26.635446   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.635455   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:26.635461   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:26.635527   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:26.675347   60933 cri.go:89] found id: ""
	I1216 21:01:26.675379   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.675390   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:26.675397   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:26.675464   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:26.715444   60933 cri.go:89] found id: ""
	I1216 21:01:26.715469   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.715480   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:26.715541   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:26.715619   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:26.753841   60933 cri.go:89] found id: ""
	I1216 21:01:26.753874   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.753893   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:26.753901   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:26.753963   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:26.791427   60933 cri.go:89] found id: ""
	I1216 21:01:26.791453   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.791464   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:26.791473   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:26.791539   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:26.832772   60933 cri.go:89] found id: ""
	I1216 21:01:26.832804   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.832816   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:26.832823   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:26.832887   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:26.869963   60933 cri.go:89] found id: ""
	I1216 21:01:26.869990   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.869997   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:26.870003   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:26.870068   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:26.906792   60933 cri.go:89] found id: ""
	I1216 21:01:26.906821   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.906862   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:26.906875   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:26.906894   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:26.994820   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:26.994863   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:27.034642   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:27.034686   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:27.089128   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:27.089168   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:27.104368   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:27.104401   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:27.179852   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:25.436696   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:27.937229   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:26.319568   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:28.820132   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:28.454880   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:30.455734   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:29.681052   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:29.695376   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:29.695464   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:29.735562   60933 cri.go:89] found id: ""
	I1216 21:01:29.735588   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.735596   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:29.735602   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:29.735650   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:29.772635   60933 cri.go:89] found id: ""
	I1216 21:01:29.772663   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.772672   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:29.772678   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:29.772737   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:29.810471   60933 cri.go:89] found id: ""
	I1216 21:01:29.810499   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.810509   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:29.810516   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:29.810575   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:29.845917   60933 cri.go:89] found id: ""
	I1216 21:01:29.845952   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.845966   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:29.845975   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:29.846048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:29.883866   60933 cri.go:89] found id: ""
	I1216 21:01:29.883892   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.883900   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:29.883906   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:29.883968   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:29.920696   60933 cri.go:89] found id: ""
	I1216 21:01:29.920729   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.920740   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:29.920748   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:29.920831   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:29.957977   60933 cri.go:89] found id: ""
	I1216 21:01:29.958056   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.958069   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:29.958079   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:29.958144   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:29.995436   60933 cri.go:89] found id: ""
	I1216 21:01:29.995464   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.995472   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:29.995481   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:29.995492   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:30.046819   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:30.046859   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:30.062754   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:30.062807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:30.138932   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:30.138959   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:30.138975   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:30.225720   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:30.225768   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:32.768185   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:32.782642   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:32.782729   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:32.821995   60933 cri.go:89] found id: ""
	I1216 21:01:32.822029   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.822040   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:32.822048   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:32.822112   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:32.858453   60933 cri.go:89] found id: ""
	I1216 21:01:32.858487   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.858497   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:32.858504   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:32.858570   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:32.896269   60933 cri.go:89] found id: ""
	I1216 21:01:32.896304   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.896316   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:32.896323   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:32.896384   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:32.936795   60933 cri.go:89] found id: ""
	I1216 21:01:32.936820   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.936832   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:32.936838   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:32.936904   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:32.974779   60933 cri.go:89] found id: ""
	I1216 21:01:32.974810   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.974821   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:32.974828   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:32.974892   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:33.012201   60933 cri.go:89] found id: ""
	I1216 21:01:33.012226   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.012234   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:33.012239   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:33.012287   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:33.049777   60933 cri.go:89] found id: ""
	I1216 21:01:33.049803   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.049811   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:33.049816   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:33.049873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:33.087820   60933 cri.go:89] found id: ""
	I1216 21:01:33.087851   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.087859   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:33.087870   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:33.087885   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:33.140816   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:33.140854   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:33.154817   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:33.154855   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:33.231445   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:33.231474   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:33.231496   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:33.311547   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:33.311586   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:29.938045   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:32.436934   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:34.444209   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:31.321180   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:33.324091   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:32.956028   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.454994   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:37.455094   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.855686   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:35.870404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:35.870485   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:35.908175   60933 cri.go:89] found id: ""
	I1216 21:01:35.908204   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.908215   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:35.908222   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:35.908284   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:35.955456   60933 cri.go:89] found id: ""
	I1216 21:01:35.955483   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.955494   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:35.955501   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:35.955562   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:35.995170   60933 cri.go:89] found id: ""
	I1216 21:01:35.995201   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.995211   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:35.995218   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:35.995305   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:36.033729   60933 cri.go:89] found id: ""
	I1216 21:01:36.033758   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.033769   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:36.033776   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:36.033840   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:36.072756   60933 cri.go:89] found id: ""
	I1216 21:01:36.072787   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.072799   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:36.072806   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:36.072873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:36.112149   60933 cri.go:89] found id: ""
	I1216 21:01:36.112187   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.112198   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:36.112205   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:36.112258   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:36.148742   60933 cri.go:89] found id: ""
	I1216 21:01:36.148770   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.148781   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:36.148789   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:36.148855   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:36.192827   60933 cri.go:89] found id: ""
	I1216 21:01:36.192864   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.192875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:36.192886   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:36.192901   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:36.243822   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:36.243867   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:36.258258   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:36.258292   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:36.342847   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:36.342876   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:36.342891   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:36.424741   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:36.424780   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:38.967334   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:38.982208   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:38.982283   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:39.023903   60933 cri.go:89] found id: ""
	I1216 21:01:39.023931   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.023939   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:39.023945   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:39.023997   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:39.070314   60933 cri.go:89] found id: ""
	I1216 21:01:39.070342   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.070351   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:39.070359   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:39.070423   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:39.115081   60933 cri.go:89] found id: ""
	I1216 21:01:39.115106   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.115113   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:39.115119   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:39.115178   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:39.151933   60933 cri.go:89] found id: ""
	I1216 21:01:39.151959   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.151967   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:39.151972   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:39.152022   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:39.192280   60933 cri.go:89] found id: ""
	I1216 21:01:39.192307   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.192315   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:39.192322   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:39.192370   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:39.228792   60933 cri.go:89] found id: ""
	I1216 21:01:39.228814   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.228822   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:39.228827   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:39.228887   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:39.266823   60933 cri.go:89] found id: ""
	I1216 21:01:39.266847   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.266854   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:39.266860   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:39.266908   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:39.301317   60933 cri.go:89] found id: ""
	I1216 21:01:39.301340   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.301348   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:39.301361   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:39.301372   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:39.386615   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:39.386663   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:36.936376   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:38.936968   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.820025   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:37.820396   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:40.319915   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:39.457790   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:41.955758   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:39.433079   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:39.433112   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:39.489422   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:39.489458   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:39.504223   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:39.504259   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:39.587898   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:42.088900   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:42.103768   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:42.103854   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:42.141956   60933 cri.go:89] found id: ""
	I1216 21:01:42.142026   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.142040   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:42.142049   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:42.142117   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:42.178754   60933 cri.go:89] found id: ""
	I1216 21:01:42.178782   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.178818   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:42.178833   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:42.178891   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:42.215872   60933 cri.go:89] found id: ""
	I1216 21:01:42.215905   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.215916   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:42.215923   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:42.215991   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:42.253854   60933 cri.go:89] found id: ""
	I1216 21:01:42.253885   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.253896   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:42.253904   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:42.253972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:42.290963   60933 cri.go:89] found id: ""
	I1216 21:01:42.291008   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.291023   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:42.291039   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:42.291109   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:42.332920   60933 cri.go:89] found id: ""
	I1216 21:01:42.332946   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.332953   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:42.332959   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:42.333006   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:42.375060   60933 cri.go:89] found id: ""
	I1216 21:01:42.375093   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.375104   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:42.375112   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:42.375189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:42.416593   60933 cri.go:89] found id: ""
	I1216 21:01:42.416621   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.416631   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:42.416639   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:42.416651   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:42.475204   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:42.475260   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:42.491022   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:42.491057   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:42.566645   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:42.566672   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:42.566687   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:42.646815   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:42.646856   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:41.436872   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:43.936734   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:42.321709   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:44.321985   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:43.955807   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:46.455508   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:45.191912   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:45.205487   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:45.205548   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:45.245350   60933 cri.go:89] found id: ""
	I1216 21:01:45.245389   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.245397   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:45.245404   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:45.245482   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:45.302126   60933 cri.go:89] found id: ""
	I1216 21:01:45.302158   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.302171   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:45.302178   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:45.302251   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:45.342888   60933 cri.go:89] found id: ""
	I1216 21:01:45.342917   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.342932   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:45.342937   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:45.342990   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:45.381545   60933 cri.go:89] found id: ""
	I1216 21:01:45.381574   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.381585   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:45.381593   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:45.381652   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:45.418081   60933 cri.go:89] found id: ""
	I1216 21:01:45.418118   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.418131   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:45.418138   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:45.418207   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:45.458610   60933 cri.go:89] found id: ""
	I1216 21:01:45.458637   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.458647   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:45.458655   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:45.458713   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:45.500102   60933 cri.go:89] found id: ""
	I1216 21:01:45.500137   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.500148   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:45.500155   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:45.500217   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:45.542074   60933 cri.go:89] found id: ""
	I1216 21:01:45.542103   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.542113   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:45.542122   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:45.542134   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:45.597577   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:45.597614   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:45.614028   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:45.614075   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:45.693014   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:45.693039   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:45.693056   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:45.772260   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:45.772295   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:48.317073   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:48.332176   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:48.332242   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:48.369946   60933 cri.go:89] found id: ""
	I1216 21:01:48.369976   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.369988   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:48.369994   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:48.370059   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:48.407628   60933 cri.go:89] found id: ""
	I1216 21:01:48.407661   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.407672   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:48.407680   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:48.407742   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:48.444377   60933 cri.go:89] found id: ""
	I1216 21:01:48.444403   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.444411   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:48.444416   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:48.444467   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:48.485674   60933 cri.go:89] found id: ""
	I1216 21:01:48.485710   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.485722   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:48.485730   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:48.485785   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:48.530577   60933 cri.go:89] found id: ""
	I1216 21:01:48.530610   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.530621   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:48.530628   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:48.530693   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:48.567128   60933 cri.go:89] found id: ""
	I1216 21:01:48.567151   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.567159   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:48.567165   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:48.567216   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:48.603294   60933 cri.go:89] found id: ""
	I1216 21:01:48.603320   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.603327   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:48.603333   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:48.603392   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:48.646221   60933 cri.go:89] found id: ""
	I1216 21:01:48.646253   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.646265   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:48.646288   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:48.646318   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:48.697589   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:48.697624   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:48.711916   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:48.711947   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:48.789068   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:48.789097   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:48.789113   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:48.872340   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:48.872378   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:45.937806   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.437160   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:46.819986   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.821079   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.456975   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:50.956101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:51.418176   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:51.434851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:51.434948   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:51.478935   60933 cri.go:89] found id: ""
	I1216 21:01:51.478963   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.478975   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:51.478982   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:51.479043   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:51.524581   60933 cri.go:89] found id: ""
	I1216 21:01:51.524611   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.524622   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:51.524629   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:51.524686   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:51.563479   60933 cri.go:89] found id: ""
	I1216 21:01:51.563507   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.563516   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:51.563521   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:51.563578   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:51.601931   60933 cri.go:89] found id: ""
	I1216 21:01:51.601964   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.601975   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:51.601982   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:51.602044   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:51.638984   60933 cri.go:89] found id: ""
	I1216 21:01:51.639014   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.639025   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:51.639032   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:51.639093   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:51.681137   60933 cri.go:89] found id: ""
	I1216 21:01:51.681167   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.681178   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:51.681185   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:51.681263   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:51.722904   60933 cri.go:89] found id: ""
	I1216 21:01:51.722932   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.722941   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:51.722946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:51.722994   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:51.794403   60933 cri.go:89] found id: ""
	I1216 21:01:51.794434   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.794444   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:51.794453   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:51.794464   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:51.850688   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:51.850724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:51.866049   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:51.866079   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:51.949844   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:51.949880   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:51.949894   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:52.028981   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:52.029023   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:50.936202   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:52.936839   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:51.321959   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:53.819864   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:53.455360   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:55.954957   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:54.570192   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:54.585405   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:54.585489   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:54.627670   60933 cri.go:89] found id: ""
	I1216 21:01:54.627701   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.627712   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:54.627719   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:54.627782   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:54.671226   60933 cri.go:89] found id: ""
	I1216 21:01:54.671265   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.671276   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:54.671283   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:54.671337   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:54.705549   60933 cri.go:89] found id: ""
	I1216 21:01:54.705581   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.705592   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:54.705600   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:54.705663   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:54.743638   60933 cri.go:89] found id: ""
	I1216 21:01:54.743664   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.743671   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:54.743677   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:54.743728   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:54.781714   60933 cri.go:89] found id: ""
	I1216 21:01:54.781750   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.781760   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:54.781767   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:54.781831   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:54.830808   60933 cri.go:89] found id: ""
	I1216 21:01:54.830840   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.830851   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:54.830859   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:54.830923   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:54.868539   60933 cri.go:89] found id: ""
	I1216 21:01:54.868565   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.868573   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:54.868578   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:54.868626   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:54.906554   60933 cri.go:89] found id: ""
	I1216 21:01:54.906587   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.906595   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:54.906604   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:54.906617   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:54.960664   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:54.960696   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:54.975657   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:54.975686   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:55.052266   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:55.052293   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:55.052320   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:55.137894   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:55.137937   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:57.682769   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:57.699102   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:57.699184   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:57.764651   60933 cri.go:89] found id: ""
	I1216 21:01:57.764684   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.764692   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:57.764698   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:57.764755   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:57.805358   60933 cri.go:89] found id: ""
	I1216 21:01:57.805385   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.805395   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:57.805404   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:57.805474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:57.843589   60933 cri.go:89] found id: ""
	I1216 21:01:57.843623   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.843634   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:57.843644   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:57.843716   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:57.881725   60933 cri.go:89] found id: ""
	I1216 21:01:57.881748   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.881756   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:57.881761   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:57.881811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:57.922252   60933 cri.go:89] found id: ""
	I1216 21:01:57.922293   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.922305   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:57.922322   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:57.922385   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:57.962532   60933 cri.go:89] found id: ""
	I1216 21:01:57.962555   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.962562   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:57.962567   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:57.962615   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:58.002021   60933 cri.go:89] found id: ""
	I1216 21:01:58.002056   60933 logs.go:282] 0 containers: []
	W1216 21:01:58.002067   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:58.002074   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:58.002137   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:58.035648   60933 cri.go:89] found id: ""
	I1216 21:01:58.035672   60933 logs.go:282] 0 containers: []
	W1216 21:01:58.035680   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:58.035688   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:58.035699   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:58.116142   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:58.116177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:58.157683   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:58.157717   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:58.211686   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:58.211722   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:58.226385   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:58.226409   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:58.302287   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:54.937208   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:57.437396   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:59.438489   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:56.326836   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:58.818671   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:57.955980   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.455212   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.802544   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:00.816325   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:00.816405   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:00.853031   60933 cri.go:89] found id: ""
	I1216 21:02:00.853057   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.853065   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:00.853070   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:00.853122   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:00.891040   60933 cri.go:89] found id: ""
	I1216 21:02:00.891071   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.891082   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:00.891089   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:00.891151   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:00.929145   60933 cri.go:89] found id: ""
	I1216 21:02:00.929168   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.929175   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:00.929181   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:00.929227   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:00.976469   60933 cri.go:89] found id: ""
	I1216 21:02:00.976492   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.976500   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:00.976505   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:00.976553   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:01.015053   60933 cri.go:89] found id: ""
	I1216 21:02:01.015078   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.015086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:01.015092   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:01.015150   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:01.052859   60933 cri.go:89] found id: ""
	I1216 21:02:01.052891   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.052902   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:01.052909   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:01.053028   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:01.091209   60933 cri.go:89] found id: ""
	I1216 21:02:01.091238   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.091259   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:01.091266   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:01.091341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:01.127013   60933 cri.go:89] found id: ""
	I1216 21:02:01.127038   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.127047   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:01.127058   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:01.127072   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:01.179642   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:01.179697   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:01.196390   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:01.196416   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:01.275446   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:01.275478   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:01.275493   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:01.354391   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:01.354429   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:03.897672   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:03.911596   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:03.911654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:03.955700   60933 cri.go:89] found id: ""
	I1216 21:02:03.955726   60933 logs.go:282] 0 containers: []
	W1216 21:02:03.955735   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:03.955741   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:03.955803   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:03.995661   60933 cri.go:89] found id: ""
	I1216 21:02:03.995696   60933 logs.go:282] 0 containers: []
	W1216 21:02:03.995706   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:03.995713   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:03.995772   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:04.031368   60933 cri.go:89] found id: ""
	I1216 21:02:04.031391   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.031398   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:04.031406   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:04.031455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:04.067633   60933 cri.go:89] found id: ""
	I1216 21:02:04.067659   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.067666   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:04.067671   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:04.067719   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:04.105734   60933 cri.go:89] found id: ""
	I1216 21:02:04.105758   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.105768   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:04.105773   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:04.105824   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:04.146542   60933 cri.go:89] found id: ""
	I1216 21:02:04.146564   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.146571   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:04.146577   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:04.146623   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:04.184433   60933 cri.go:89] found id: ""
	I1216 21:02:04.184462   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.184473   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:04.184480   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:04.184551   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:04.223077   60933 cri.go:89] found id: ""
	I1216 21:02:04.223106   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.223117   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:04.223127   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:04.223140   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:04.279618   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:04.279656   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:04.295841   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:04.295865   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:04.372609   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:04.372632   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:04.372648   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:01.937175   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:03.937249   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.819801   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:02.820563   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:05.320087   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:02.955461   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:05.455023   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:07.456981   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:04.457597   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:04.457631   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:07.006004   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:07.020394   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:07.020537   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:07.064242   60933 cri.go:89] found id: ""
	I1216 21:02:07.064274   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.064283   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:07.064289   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:07.064337   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:07.108865   60933 cri.go:89] found id: ""
	I1216 21:02:07.108899   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.108910   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:07.108917   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:07.108985   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:07.149021   60933 cri.go:89] found id: ""
	I1216 21:02:07.149051   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.149060   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:07.149066   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:07.149120   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:07.187808   60933 cri.go:89] found id: ""
	I1216 21:02:07.187833   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.187843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:07.187850   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:07.187912   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:07.228748   60933 cri.go:89] found id: ""
	I1216 21:02:07.228774   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.228785   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:07.228792   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:07.228853   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:07.267961   60933 cri.go:89] found id: ""
	I1216 21:02:07.267996   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.268012   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:07.268021   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:07.268099   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:07.312464   60933 cri.go:89] found id: ""
	I1216 21:02:07.312491   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.312498   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:07.312503   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:07.312554   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:07.351902   60933 cri.go:89] found id: ""
	I1216 21:02:07.351933   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.351946   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:07.351958   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:07.351974   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:07.405985   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:07.406050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:07.420796   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:07.420842   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:07.506527   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:07.506559   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:07.506574   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:07.587965   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:07.588001   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:06.437434   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:08.937843   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:07.320229   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:09.819940   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:09.954900   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:11.955004   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:10.132876   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:10.146785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:10.146858   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:10.189278   60933 cri.go:89] found id: ""
	I1216 21:02:10.189312   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.189324   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:10.189332   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:10.189402   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:10.228331   60933 cri.go:89] found id: ""
	I1216 21:02:10.228370   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.228378   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:10.228383   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:10.228436   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:10.266424   60933 cri.go:89] found id: ""
	I1216 21:02:10.266458   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.266470   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:10.266478   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:10.266542   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:10.305865   60933 cri.go:89] found id: ""
	I1216 21:02:10.305890   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.305902   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:10.305909   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:10.305968   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:10.344211   60933 cri.go:89] found id: ""
	I1216 21:02:10.344239   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.344247   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:10.344253   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:10.344314   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:10.381939   60933 cri.go:89] found id: ""
	I1216 21:02:10.381993   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.382004   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:10.382011   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:10.382076   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:10.418882   60933 cri.go:89] found id: ""
	I1216 21:02:10.418908   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.418915   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:10.418921   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:10.418972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:10.458397   60933 cri.go:89] found id: ""
	I1216 21:02:10.458425   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.458434   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:10.458447   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:10.458462   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:10.472152   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:10.472180   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:10.545888   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:10.545913   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:10.545926   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:10.627223   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:10.627293   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:10.676606   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:10.676633   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:13.227283   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:13.242871   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:13.242954   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:13.280676   60933 cri.go:89] found id: ""
	I1216 21:02:13.280711   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.280723   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:13.280731   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:13.280786   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:13.321357   60933 cri.go:89] found id: ""
	I1216 21:02:13.321389   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.321400   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:13.321408   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:13.321474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:13.359002   60933 cri.go:89] found id: ""
	I1216 21:02:13.359030   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.359042   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:13.359050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:13.359116   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:13.395879   60933 cri.go:89] found id: ""
	I1216 21:02:13.395922   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.395941   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:13.395950   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:13.396017   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:13.436761   60933 cri.go:89] found id: ""
	I1216 21:02:13.436781   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.436788   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:13.436793   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:13.436852   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:13.478839   60933 cri.go:89] found id: ""
	I1216 21:02:13.478869   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.478877   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:13.478883   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:13.478947   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:13.520013   60933 cri.go:89] found id: ""
	I1216 21:02:13.520037   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.520044   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:13.520050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:13.520124   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:13.556973   60933 cri.go:89] found id: ""
	I1216 21:02:13.557001   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.557013   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:13.557023   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:13.557039   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:13.613499   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:13.613537   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:13.628689   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:13.628724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:13.706556   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:13.706576   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:13.706589   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:13.786379   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:13.786419   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:11.436179   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:13.436800   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:11.820109   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:13.820778   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:14.457666   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.955591   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.333578   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:16.347948   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:16.348020   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:16.386928   60933 cri.go:89] found id: ""
	I1216 21:02:16.386955   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.386963   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:16.386969   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:16.387033   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:16.425192   60933 cri.go:89] found id: ""
	I1216 21:02:16.425253   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.425265   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:16.425273   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:16.425355   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:16.465522   60933 cri.go:89] found id: ""
	I1216 21:02:16.465554   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.465565   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:16.465573   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:16.465638   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:16.504567   60933 cri.go:89] found id: ""
	I1216 21:02:16.504605   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.504616   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:16.504624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:16.504694   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:16.541823   60933 cri.go:89] found id: ""
	I1216 21:02:16.541852   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.541864   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:16.541872   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:16.541942   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:16.580898   60933 cri.go:89] found id: ""
	I1216 21:02:16.580927   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.580938   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:16.580946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:16.581003   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:16.626006   60933 cri.go:89] found id: ""
	I1216 21:02:16.626036   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.626046   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:16.626053   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:16.626109   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:16.662686   60933 cri.go:89] found id: ""
	I1216 21:02:16.662712   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.662719   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:16.662728   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:16.662740   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:16.717939   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:16.717978   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:16.733431   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:16.733466   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:16.807379   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:16.807409   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:16.807421   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:16.896455   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:16.896492   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:15.437791   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:17.935778   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.321167   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:18.819624   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:18.955621   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:20.956220   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:19.442959   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:19.458684   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:19.458749   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:19.499907   60933 cri.go:89] found id: ""
	I1216 21:02:19.499938   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.499947   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:19.499954   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:19.500002   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:19.538010   60933 cri.go:89] found id: ""
	I1216 21:02:19.538035   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.538043   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:19.538049   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:19.538148   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:19.577097   60933 cri.go:89] found id: ""
	I1216 21:02:19.577131   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.577139   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:19.577145   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:19.577196   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:19.617288   60933 cri.go:89] found id: ""
	I1216 21:02:19.617316   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.617326   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:19.617332   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:19.617392   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:19.658066   60933 cri.go:89] found id: ""
	I1216 21:02:19.658090   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.658097   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:19.658103   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:19.658153   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:19.696077   60933 cri.go:89] found id: ""
	I1216 21:02:19.696108   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.696121   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:19.696131   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:19.696189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:19.737657   60933 cri.go:89] found id: ""
	I1216 21:02:19.737692   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.737704   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:19.737712   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:19.737776   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:19.778699   60933 cri.go:89] found id: ""
	I1216 21:02:19.778729   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.778738   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:19.778746   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:19.778757   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:19.841941   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:19.841979   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:19.857752   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:19.857788   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:19.935980   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:19.936004   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:19.936020   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:20.019999   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:20.020046   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:22.566398   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:22.580376   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:22.580472   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:22.620240   60933 cri.go:89] found id: ""
	I1216 21:02:22.620273   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.620284   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:22.620292   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:22.620355   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:22.656413   60933 cri.go:89] found id: ""
	I1216 21:02:22.656444   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.656455   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:22.656463   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:22.656531   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:22.690956   60933 cri.go:89] found id: ""
	I1216 21:02:22.690978   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.690986   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:22.690992   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:22.691040   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:22.734851   60933 cri.go:89] found id: ""
	I1216 21:02:22.734885   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.734895   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:22.734903   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:22.734969   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:22.774416   60933 cri.go:89] found id: ""
	I1216 21:02:22.774450   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.774461   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:22.774467   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:22.774535   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:22.811162   60933 cri.go:89] found id: ""
	I1216 21:02:22.811192   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.811204   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:22.811212   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:22.811296   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:22.851955   60933 cri.go:89] found id: ""
	I1216 21:02:22.851980   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.851987   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:22.851993   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:22.852051   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:22.888699   60933 cri.go:89] found id: ""
	I1216 21:02:22.888725   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.888736   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:22.888747   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:22.888769   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:22.944065   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:22.944100   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:22.960842   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:22.960872   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:23.036229   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:23.036251   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:23.036263   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:23.122493   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:23.122535   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:19.936687   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:21.937222   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:24.437190   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:20.820544   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:22.820771   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.319776   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:22.956523   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.456180   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.667995   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:25.682152   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:25.682222   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:25.719092   60933 cri.go:89] found id: ""
	I1216 21:02:25.719120   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.719130   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:25.719135   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:25.719190   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:25.757668   60933 cri.go:89] found id: ""
	I1216 21:02:25.757702   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.757712   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:25.757720   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:25.757791   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:25.809743   60933 cri.go:89] found id: ""
	I1216 21:02:25.809776   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.809787   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:25.809795   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:25.809857   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:25.849181   60933 cri.go:89] found id: ""
	I1216 21:02:25.849211   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.849222   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:25.849230   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:25.849295   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:25.891032   60933 cri.go:89] found id: ""
	I1216 21:02:25.891079   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.891091   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:25.891098   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:25.891169   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:25.930549   60933 cri.go:89] found id: ""
	I1216 21:02:25.930575   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.930583   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:25.930589   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:25.930639   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:25.971709   60933 cri.go:89] found id: ""
	I1216 21:02:25.971736   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.971744   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:25.971749   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:25.971797   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:26.007728   60933 cri.go:89] found id: ""
	I1216 21:02:26.007760   60933 logs.go:282] 0 containers: []
	W1216 21:02:26.007769   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:26.007778   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:26.007791   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:26.059710   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:26.059752   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:26.074596   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:26.074627   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:26.145892   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:26.145913   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:26.145924   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:26.225961   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:26.226000   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
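The repeating cycle above probes for a kube-apiserver process, lists CRI containers for each control-plane component, and, finding none, falls back to collecting kubelet, dmesg, describe-nodes, CRI-O and container-status output before retrying a few seconds later. A minimal shell sketch of that per-component check, reusing only commands that appear verbatim in the log (it assumes crictl and journalctl are available on the node, as they are in the minikube guest):

	# re-run of the per-component container check seen in the log above
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")   # all states, IDs only
	  if [ -z "$ids" ]; then
	    echo "No container was found matching \"$c\""
	  fi
	done
	# node-level diagnostics gathered when nothing is running
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400
	sudo crictl ps -a || sudo docker ps -a

Because no control-plane container ever starts, the describe-nodes step keeps failing with "connection to the server localhost:8443 was refused", which is consistent with the empty crictl listings above it.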
	I1216 21:02:28.772974   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:28.787001   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:28.787078   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:28.828176   60933 cri.go:89] found id: ""
	I1216 21:02:28.828206   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.828214   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:28.828223   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:28.828292   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:28.872750   60933 cri.go:89] found id: ""
	I1216 21:02:28.872781   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.872792   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:28.872798   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:28.872859   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:28.914844   60933 cri.go:89] found id: ""
	I1216 21:02:28.914871   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.914879   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:28.914884   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:28.914934   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:28.953541   60933 cri.go:89] found id: ""
	I1216 21:02:28.953569   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.953579   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:28.953587   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:28.953647   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:28.992768   60933 cri.go:89] found id: ""
	I1216 21:02:28.992797   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.992808   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:28.992816   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:28.992882   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:29.030069   60933 cri.go:89] found id: ""
	I1216 21:02:29.030102   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.030113   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:29.030121   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:29.030187   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:29.068629   60933 cri.go:89] found id: ""
	I1216 21:02:29.068658   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.068666   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:29.068677   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:29.068726   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:29.103664   60933 cri.go:89] found id: ""
	I1216 21:02:29.103697   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.103708   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:29.103719   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:29.103732   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:29.151225   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:29.151276   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:29.209448   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:29.209499   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:29.225232   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:29.225257   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:29.309812   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:29.309832   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:29.309846   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:26.937193   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:28.937302   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:27.320052   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:29.820220   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:27.956244   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:29.957111   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:32.456969   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
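The interleaved pod_ready lines are from parallel test processes (60215, 60829, 60421) polling metrics-server pods that never report Ready. An equivalent manual check of the same condition, assuming kubectl has a context for the affected profile (the pod name below is copied from the log):

	kubectl --namespace kube-system get pod metrics-server-f79f97bbb-5xf67 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# prints "False" while the pod is unready, matching the pod_ready.go:103 entries above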
	I1216 21:02:31.896263   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:31.912378   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:31.912455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:31.950479   60933 cri.go:89] found id: ""
	I1216 21:02:31.950508   60933 logs.go:282] 0 containers: []
	W1216 21:02:31.950527   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:31.950535   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:31.950600   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:31.990479   60933 cri.go:89] found id: ""
	I1216 21:02:31.990504   60933 logs.go:282] 0 containers: []
	W1216 21:02:31.990515   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:31.990533   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:31.990599   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:32.032808   60933 cri.go:89] found id: ""
	I1216 21:02:32.032834   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.032843   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:32.032853   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:32.032913   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:32.069719   60933 cri.go:89] found id: ""
	I1216 21:02:32.069748   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.069759   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:32.069772   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:32.069830   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:32.106652   60933 cri.go:89] found id: ""
	I1216 21:02:32.106685   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.106694   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:32.106701   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:32.106767   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:32.145921   60933 cri.go:89] found id: ""
	I1216 21:02:32.145949   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.145957   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:32.145963   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:32.146014   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:32.206313   60933 cri.go:89] found id: ""
	I1216 21:02:32.206342   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.206351   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:32.206356   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:32.206410   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:32.262757   60933 cri.go:89] found id: ""
	I1216 21:02:32.262794   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.262806   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:32.262818   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:32.262832   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:32.320221   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:32.320251   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:32.375395   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:32.375437   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:32.391103   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:32.391137   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:32.474709   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:32.474741   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:32.474757   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:31.436689   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:33.436921   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:32.320631   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:34.819726   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:34.956369   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:37.455577   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:35.058809   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:35.073074   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:35.073157   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:35.115280   60933 cri.go:89] found id: ""
	I1216 21:02:35.115305   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.115312   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:35.115318   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:35.115378   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:35.151561   60933 cri.go:89] found id: ""
	I1216 21:02:35.151589   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.151597   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:35.151603   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:35.151654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:35.192061   60933 cri.go:89] found id: ""
	I1216 21:02:35.192088   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.192095   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:35.192111   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:35.192161   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:35.231493   60933 cri.go:89] found id: ""
	I1216 21:02:35.231523   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.231531   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:35.231538   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:35.231586   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:35.271236   60933 cri.go:89] found id: ""
	I1216 21:02:35.271291   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.271300   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:35.271306   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:35.271368   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:35.309950   60933 cri.go:89] found id: ""
	I1216 21:02:35.309980   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.309991   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:35.309999   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:35.310062   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:35.347762   60933 cri.go:89] found id: ""
	I1216 21:02:35.347790   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.347797   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:35.347803   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:35.347851   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:35.390732   60933 cri.go:89] found id: ""
	I1216 21:02:35.390757   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.390765   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:35.390774   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:35.390785   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:35.447068   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:35.447112   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:35.462873   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:35.462904   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:35.541120   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:35.541145   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:35.541162   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:35.627073   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:35.627120   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:38.170994   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:38.194371   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:38.194434   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:38.248023   60933 cri.go:89] found id: ""
	I1216 21:02:38.248050   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.248061   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:38.248069   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:38.248147   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:38.300143   60933 cri.go:89] found id: ""
	I1216 21:02:38.300175   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.300185   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:38.300193   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:38.300253   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:38.345273   60933 cri.go:89] found id: ""
	I1216 21:02:38.345300   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.345308   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:38.345314   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:38.345389   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:38.383032   60933 cri.go:89] found id: ""
	I1216 21:02:38.383066   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.383075   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:38.383081   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:38.383135   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:38.426042   60933 cri.go:89] found id: ""
	I1216 21:02:38.426074   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.426086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:38.426094   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:38.426159   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:38.467596   60933 cri.go:89] found id: ""
	I1216 21:02:38.467625   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.467634   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:38.467640   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:38.467692   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:38.509340   60933 cri.go:89] found id: ""
	I1216 21:02:38.509380   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.509391   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:38.509399   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:38.509470   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:38.549306   60933 cri.go:89] found id: ""
	I1216 21:02:38.549337   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.549354   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:38.549365   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:38.549381   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:38.564091   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:38.564131   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:38.639173   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:38.639201   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:38.639219   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:38.716320   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:38.716376   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:38.756779   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:38.756815   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:35.437230   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:37.938595   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:36.820302   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:39.319712   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:39.954558   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:41.955761   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:41.310680   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:41.327606   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:41.327684   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:41.371622   60933 cri.go:89] found id: ""
	I1216 21:02:41.371657   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.371670   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:41.371679   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:41.371739   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:41.408149   60933 cri.go:89] found id: ""
	I1216 21:02:41.408187   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.408198   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:41.408203   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:41.408252   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:41.448445   60933 cri.go:89] found id: ""
	I1216 21:02:41.448471   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.448478   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:41.448484   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:41.448533   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:41.489957   60933 cri.go:89] found id: ""
	I1216 21:02:41.489989   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.490000   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:41.490007   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:41.490069   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:41.532891   60933 cri.go:89] found id: ""
	I1216 21:02:41.532918   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.532930   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:41.532937   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:41.532992   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:41.570315   60933 cri.go:89] found id: ""
	I1216 21:02:41.570342   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.570351   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:41.570357   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:41.570455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:41.606833   60933 cri.go:89] found id: ""
	I1216 21:02:41.606867   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.606880   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:41.606890   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:41.606959   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:41.643862   60933 cri.go:89] found id: ""
	I1216 21:02:41.643886   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.643894   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:41.643902   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:41.643914   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:41.657621   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:41.657654   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:41.732256   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:41.732281   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:41.732295   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:41.822045   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:41.822081   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:41.863900   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:41.863933   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:40.436149   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:42.436247   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:44.436916   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:41.321155   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:43.819721   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:43.956057   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:46.455802   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:44.425154   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:44.440148   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:44.440223   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:44.478216   60933 cri.go:89] found id: ""
	I1216 21:02:44.478247   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.478258   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:44.478266   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:44.478329   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:44.517054   60933 cri.go:89] found id: ""
	I1216 21:02:44.517078   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.517084   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:44.517090   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:44.517137   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:44.554683   60933 cri.go:89] found id: ""
	I1216 21:02:44.554778   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.554801   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:44.554845   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:44.554927   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:44.600748   60933 cri.go:89] found id: ""
	I1216 21:02:44.600788   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.600800   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:44.600809   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:44.600863   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:44.637564   60933 cri.go:89] found id: ""
	I1216 21:02:44.637592   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.637600   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:44.637606   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:44.637656   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:44.676619   60933 cri.go:89] found id: ""
	I1216 21:02:44.676662   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.676674   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:44.676683   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:44.676755   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:44.715920   60933 cri.go:89] found id: ""
	I1216 21:02:44.715956   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.715964   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:44.715970   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:44.716027   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:44.755134   60933 cri.go:89] found id: ""
	I1216 21:02:44.755167   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.755179   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:44.755191   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:44.755202   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:44.796135   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:44.796164   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:44.850550   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:44.850593   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:44.865278   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:44.865305   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:44.942987   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:44.943013   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:44.943026   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:47.529850   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:47.546292   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:47.546369   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:47.589597   60933 cri.go:89] found id: ""
	I1216 21:02:47.589627   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.589640   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:47.589648   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:47.589713   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:47.630998   60933 cri.go:89] found id: ""
	I1216 21:02:47.631030   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.631043   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:47.631051   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:47.631118   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:47.670118   60933 cri.go:89] found id: ""
	I1216 21:02:47.670150   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.670162   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:47.670169   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:47.670233   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:47.714516   60933 cri.go:89] found id: ""
	I1216 21:02:47.714549   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.714560   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:47.714568   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:47.714631   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:47.752042   60933 cri.go:89] found id: ""
	I1216 21:02:47.752074   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.752086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:47.752093   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:47.752158   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:47.793612   60933 cri.go:89] found id: ""
	I1216 21:02:47.793645   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.793656   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:47.793664   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:47.793734   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:47.833489   60933 cri.go:89] found id: ""
	I1216 21:02:47.833518   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.833529   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:47.833541   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:47.833602   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:47.869744   60933 cri.go:89] found id: ""
	I1216 21:02:47.869772   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.869783   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:47.869793   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:47.869809   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:47.910640   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:47.910674   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:47.965747   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:47.965781   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:47.979760   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:47.979786   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:48.056887   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:48.056917   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:48.056933   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:46.439409   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.937248   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:46.320935   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.820700   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.955697   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.955859   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.641224   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:50.657267   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:50.657346   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:50.696890   60933 cri.go:89] found id: ""
	I1216 21:02:50.696916   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.696924   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:50.696930   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:50.696993   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:50.734485   60933 cri.go:89] found id: ""
	I1216 21:02:50.734514   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.734524   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:50.734533   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:50.734598   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:50.776241   60933 cri.go:89] found id: ""
	I1216 21:02:50.776268   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.776277   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:50.776283   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:50.776358   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:50.816449   60933 cri.go:89] found id: ""
	I1216 21:02:50.816482   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.816493   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:50.816501   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:50.816561   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:50.857458   60933 cri.go:89] found id: ""
	I1216 21:02:50.857481   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.857488   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:50.857494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:50.857556   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:50.895367   60933 cri.go:89] found id: ""
	I1216 21:02:50.895391   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.895398   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:50.895404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:50.895466   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:50.934101   60933 cri.go:89] found id: ""
	I1216 21:02:50.934128   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.934138   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:50.934152   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:50.934212   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:50.978625   60933 cri.go:89] found id: ""
	I1216 21:02:50.978654   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.978665   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:50.978675   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:50.978688   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:51.061867   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:51.061908   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:51.101188   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:51.101228   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:51.157426   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:51.157470   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:51.172835   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:51.172882   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:51.247678   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:53.748503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:53.763357   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:53.763425   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:53.807963   60933 cri.go:89] found id: ""
	I1216 21:02:53.807990   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.807999   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:53.808005   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:53.808063   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:53.846840   60933 cri.go:89] found id: ""
	I1216 21:02:53.846867   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.846876   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:53.846881   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:53.846929   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:53.885099   60933 cri.go:89] found id: ""
	I1216 21:02:53.885131   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.885146   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:53.885156   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:53.885226   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:53.923859   60933 cri.go:89] found id: ""
	I1216 21:02:53.923890   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.923901   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:53.923908   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:53.923972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:53.964150   60933 cri.go:89] found id: ""
	I1216 21:02:53.964176   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.964186   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:53.964201   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:53.964265   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:54.004676   60933 cri.go:89] found id: ""
	I1216 21:02:54.004707   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.004718   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:54.004725   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:54.004789   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:54.042560   60933 cri.go:89] found id: ""
	I1216 21:02:54.042585   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.042595   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:54.042603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:54.042666   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:54.081002   60933 cri.go:89] found id: ""
	I1216 21:02:54.081030   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.081038   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:54.081046   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:54.081058   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:54.132825   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:54.132865   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:54.147793   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:54.147821   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:54.226668   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:54.226692   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:54.226704   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:54.307792   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:54.307832   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:50.938230   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:53.436746   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.820949   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:53.320283   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:52.957187   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:54.958212   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.456612   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:56.852207   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:56.866404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:56.866469   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:56.911786   60933 cri.go:89] found id: ""
	I1216 21:02:56.911811   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.911820   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:56.911829   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:56.911886   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:56.953491   60933 cri.go:89] found id: ""
	I1216 21:02:56.953520   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.953535   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:56.953543   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:56.953610   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:56.991569   60933 cri.go:89] found id: ""
	I1216 21:02:56.991605   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.991616   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:56.991622   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:56.991685   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:57.026808   60933 cri.go:89] found id: ""
	I1216 21:02:57.026837   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.026845   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:57.026851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:57.026913   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:57.065539   60933 cri.go:89] found id: ""
	I1216 21:02:57.065569   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.065577   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:57.065583   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:57.065642   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:57.103911   60933 cri.go:89] found id: ""
	I1216 21:02:57.103942   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.103952   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:57.103960   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:57.104015   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:57.141177   60933 cri.go:89] found id: ""
	I1216 21:02:57.141200   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.141207   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:57.141213   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:57.141262   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:57.178532   60933 cri.go:89] found id: ""
	I1216 21:02:57.178590   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.178604   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:57.178614   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:57.178629   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:57.234811   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:57.234846   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:57.251540   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:57.251569   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:57.329029   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:57.329061   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:57.329077   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:57.412624   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:57.412665   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:55.436981   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.438061   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:55.819607   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.819648   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.820705   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.955043   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:01.956284   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.960422   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:59.974889   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:59.974966   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:00.012641   60933 cri.go:89] found id: ""
	I1216 21:03:00.012669   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.012676   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:00.012682   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:00.012730   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:00.053730   60933 cri.go:89] found id: ""
	I1216 21:03:00.053766   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.053778   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:00.053785   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:00.053847   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:00.091213   60933 cri.go:89] found id: ""
	I1216 21:03:00.091261   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.091274   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:00.091283   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:00.091357   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:00.131357   60933 cri.go:89] found id: ""
	I1216 21:03:00.131382   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.131390   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:00.131396   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:00.131460   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:00.168331   60933 cri.go:89] found id: ""
	I1216 21:03:00.168362   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.168373   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:00.168380   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:00.168446   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:00.208326   60933 cri.go:89] found id: ""
	I1216 21:03:00.208360   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.208369   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:00.208377   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:00.208440   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:00.245775   60933 cri.go:89] found id: ""
	I1216 21:03:00.245800   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.245808   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:00.245814   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:00.245863   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:00.283062   60933 cri.go:89] found id: ""
	I1216 21:03:00.283091   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.283100   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:00.283108   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:00.283119   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:00.358767   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:00.358787   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:00.358799   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:00.443422   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:00.443460   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:00.491511   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:00.491551   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:00.566131   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:00.566172   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:03.080319   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:03.094733   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:03.094818   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:03.132388   60933 cri.go:89] found id: ""
	I1216 21:03:03.132419   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.132428   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:03.132433   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:03.132488   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:03.172345   60933 cri.go:89] found id: ""
	I1216 21:03:03.172374   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.172386   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:03.172393   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:03.172474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:03.210444   60933 cri.go:89] found id: ""
	I1216 21:03:03.210479   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.210488   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:03.210494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:03.210544   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:03.248605   60933 cri.go:89] found id: ""
	I1216 21:03:03.248644   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.248656   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:03.248664   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:03.248723   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:03.286822   60933 cri.go:89] found id: ""
	I1216 21:03:03.286854   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.286862   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:03.286868   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:03.286921   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:03.329304   60933 cri.go:89] found id: ""
	I1216 21:03:03.329333   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.329344   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:03.329352   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:03.329417   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:03.367337   60933 cri.go:89] found id: ""
	I1216 21:03:03.367361   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.367368   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:03.367373   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:03.367420   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:03.409799   60933 cri.go:89] found id: ""
	I1216 21:03:03.409821   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.409829   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:03.409838   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:03.409850   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:03.466941   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:03.466976   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:03.483090   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:03.483117   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:03.566835   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:03.566860   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:03.566878   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:03.649747   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:03.649793   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:59.936221   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:01.936251   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:03.936714   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:02.319063   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:04.319653   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:03.956397   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:05.956531   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:06.193505   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:06.207797   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:06.207878   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:06.245401   60933 cri.go:89] found id: ""
	I1216 21:03:06.245437   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.245448   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:06.245456   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:06.245521   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:06.301205   60933 cri.go:89] found id: ""
	I1216 21:03:06.301239   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.301250   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:06.301257   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:06.301326   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:06.340325   60933 cri.go:89] found id: ""
	I1216 21:03:06.340352   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.340362   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:06.340369   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:06.340429   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:06.378321   60933 cri.go:89] found id: ""
	I1216 21:03:06.378351   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.378359   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:06.378365   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:06.378422   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:06.416354   60933 cri.go:89] found id: ""
	I1216 21:03:06.416390   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.416401   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:06.416409   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:06.416473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:06.459926   60933 cri.go:89] found id: ""
	I1216 21:03:06.459955   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.459967   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:06.459975   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:06.460063   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:06.501818   60933 cri.go:89] found id: ""
	I1216 21:03:06.501849   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.501860   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:06.501866   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:06.501926   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:06.537552   60933 cri.go:89] found id: ""
	I1216 21:03:06.537583   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.537598   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:06.537607   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:06.537621   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:06.592170   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:06.592212   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:06.607148   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:06.607183   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:06.676114   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:06.676140   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:06.676151   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:06.756009   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:06.756052   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:09.298166   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:09.313104   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:09.313189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:09.356598   60933 cri.go:89] found id: ""
	I1216 21:03:09.356625   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.356640   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:09.356649   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:09.356715   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:05.937241   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:07.938858   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:06.322260   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:08.818974   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:08.455838   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:10.955332   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:09.395406   60933 cri.go:89] found id: ""
	I1216 21:03:09.395439   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.395449   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:09.395456   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:09.395521   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:09.440401   60933 cri.go:89] found id: ""
	I1216 21:03:09.440423   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.440430   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:09.440435   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:09.440504   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:09.478798   60933 cri.go:89] found id: ""
	I1216 21:03:09.478828   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.478843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:09.478853   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:09.478921   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:09.515542   60933 cri.go:89] found id: ""
	I1216 21:03:09.515575   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.515587   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:09.515596   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:09.515654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:09.554150   60933 cri.go:89] found id: ""
	I1216 21:03:09.554183   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.554194   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:09.554205   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:09.554279   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:09.591699   60933 cri.go:89] found id: ""
	I1216 21:03:09.591730   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.591740   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:09.591747   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:09.591811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:09.629938   60933 cri.go:89] found id: ""
	I1216 21:03:09.629970   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.629980   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:09.629991   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:09.630008   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:09.711255   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:09.711284   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:09.711300   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:09.790202   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:09.790243   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:09.839567   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:09.839597   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:09.893010   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:09.893050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:12.409934   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:12.423715   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:12.423789   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:12.461995   60933 cri.go:89] found id: ""
	I1216 21:03:12.462038   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.462046   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:12.462052   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:12.462101   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:12.501738   60933 cri.go:89] found id: ""
	I1216 21:03:12.501769   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.501779   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:12.501785   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:12.501833   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:12.541758   60933 cri.go:89] found id: ""
	I1216 21:03:12.541785   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.541795   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:12.541802   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:12.541850   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:12.579173   60933 cri.go:89] found id: ""
	I1216 21:03:12.579199   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.579206   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:12.579212   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:12.579302   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:12.624382   60933 cri.go:89] found id: ""
	I1216 21:03:12.624407   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.624418   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:12.624426   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:12.624488   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:12.665139   60933 cri.go:89] found id: ""
	I1216 21:03:12.665178   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.665190   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:12.665200   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:12.665274   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:12.711586   60933 cri.go:89] found id: ""
	I1216 21:03:12.711611   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.711619   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:12.711627   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:12.711678   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:12.761566   60933 cri.go:89] found id: ""
	I1216 21:03:12.761600   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.761612   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:12.761624   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:12.761640   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:12.824282   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:12.824315   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:12.839335   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:12.839371   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:12.918317   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:12.918341   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:12.918357   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:13.000375   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:13.000410   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:10.438136   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:12.936742   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:11.319284   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:13.320036   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:15.322965   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:12.955450   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:14.956186   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:16.956603   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:15.542372   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:15.556877   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:15.556960   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:15.599345   60933 cri.go:89] found id: ""
	I1216 21:03:15.599378   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.599389   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:15.599414   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:15.599479   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:15.642072   60933 cri.go:89] found id: ""
	I1216 21:03:15.642106   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.642116   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:15.642124   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:15.642189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:15.679989   60933 cri.go:89] found id: ""
	I1216 21:03:15.680025   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.680036   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:15.680044   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:15.680103   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:15.718343   60933 cri.go:89] found id: ""
	I1216 21:03:15.718371   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.718378   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:15.718384   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:15.718433   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:15.759937   60933 cri.go:89] found id: ""
	I1216 21:03:15.759971   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.759981   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:15.759988   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:15.760081   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:15.801434   60933 cri.go:89] found id: ""
	I1216 21:03:15.801463   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.801471   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:15.801477   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:15.801540   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:15.841855   60933 cri.go:89] found id: ""
	I1216 21:03:15.841879   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.841886   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:15.841892   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:15.841962   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:15.883951   60933 cri.go:89] found id: ""
	I1216 21:03:15.883974   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.883982   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:15.883990   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:15.884004   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:15.960868   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:15.960902   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:16.005700   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:16.005730   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:16.061128   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:16.061165   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:16.075601   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:16.075630   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:16.147810   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:18.648677   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:18.663298   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:18.663367   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:18.713281   60933 cri.go:89] found id: ""
	I1216 21:03:18.713313   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.713324   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:18.713332   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:18.713396   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:18.764861   60933 cri.go:89] found id: ""
	I1216 21:03:18.764892   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.764905   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:18.764912   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:18.764978   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:18.816140   60933 cri.go:89] found id: ""
	I1216 21:03:18.816170   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.816180   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:18.816188   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:18.816251   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:18.852118   60933 cri.go:89] found id: ""
	I1216 21:03:18.852151   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.852163   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:18.852171   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:18.852235   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:18.887996   60933 cri.go:89] found id: ""
	I1216 21:03:18.888018   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.888025   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:18.888031   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:18.888089   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:18.925415   60933 cri.go:89] found id: ""
	I1216 21:03:18.925437   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.925445   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:18.925451   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:18.925498   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:18.964853   60933 cri.go:89] found id: ""
	I1216 21:03:18.964884   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.964892   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:18.964897   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:18.964964   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:19.000822   60933 cri.go:89] found id: ""
	I1216 21:03:19.000848   60933 logs.go:282] 0 containers: []
	W1216 21:03:19.000856   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:19.000865   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:19.000879   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:19.051571   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:19.051612   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:19.066737   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:19.066767   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:19.143120   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:19.143144   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:19.143156   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:19.229811   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:19.229850   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:15.437189   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:17.439345   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:17.820374   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:19.820460   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:19.455707   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:21.955275   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:21.776440   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:21.792869   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:21.792951   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:21.831100   60933 cri.go:89] found id: ""
	I1216 21:03:21.831127   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.831134   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:21.831140   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:21.831196   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:21.869124   60933 cri.go:89] found id: ""
	I1216 21:03:21.869147   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.869155   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:21.869160   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:21.869215   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:21.909891   60933 cri.go:89] found id: ""
	I1216 21:03:21.909926   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.909938   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:21.909946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:21.910032   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:21.949140   60933 cri.go:89] found id: ""
	I1216 21:03:21.949169   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.949179   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:21.949186   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:21.949245   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:21.987741   60933 cri.go:89] found id: ""
	I1216 21:03:21.987771   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.987780   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:21.987785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:21.987839   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:22.025565   60933 cri.go:89] found id: ""
	I1216 21:03:22.025593   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.025601   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:22.025607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:22.025659   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:22.062076   60933 cri.go:89] found id: ""
	I1216 21:03:22.062110   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.062120   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:22.062127   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:22.062198   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:22.102037   60933 cri.go:89] found id: ""
	I1216 21:03:22.102065   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.102093   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:22.102105   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:22.102122   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:22.159185   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:22.159219   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:22.175139   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:22.175168   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:22.255769   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:22.255801   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:22.255817   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:22.339633   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:22.339681   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:19.937328   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:22.435709   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.436704   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:22.319227   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.819278   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.455668   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:26.956382   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.883865   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:24.898198   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:24.898287   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:24.939472   60933 cri.go:89] found id: ""
	I1216 21:03:24.939500   60933 logs.go:282] 0 containers: []
	W1216 21:03:24.939511   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:24.939518   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:24.939583   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:24.981798   60933 cri.go:89] found id: ""
	I1216 21:03:24.981822   60933 logs.go:282] 0 containers: []
	W1216 21:03:24.981829   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:24.981834   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:24.981889   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:25.021332   60933 cri.go:89] found id: ""
	I1216 21:03:25.021366   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.021373   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:25.021379   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:25.021431   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:25.057811   60933 cri.go:89] found id: ""
	I1216 21:03:25.057836   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.057843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:25.057848   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:25.057907   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:25.093852   60933 cri.go:89] found id: ""
	I1216 21:03:25.093881   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.093890   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:25.093895   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:25.093945   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:25.132779   60933 cri.go:89] found id: ""
	I1216 21:03:25.132813   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.132825   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:25.132834   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:25.132912   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:25.173942   60933 cri.go:89] found id: ""
	I1216 21:03:25.173967   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.173974   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:25.173990   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:25.174048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:25.213105   60933 cri.go:89] found id: ""
	I1216 21:03:25.213127   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.213135   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:25.213144   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:25.213155   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:25.267517   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:25.267557   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:25.284144   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:25.284177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:25.362901   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:25.362931   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:25.362947   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:25.450193   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:25.450227   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:27.995716   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:28.012044   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:28.012138   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:28.050404   60933 cri.go:89] found id: ""
	I1216 21:03:28.050432   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.050441   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:28.050446   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:28.050492   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:28.087830   60933 cri.go:89] found id: ""
	I1216 21:03:28.087855   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.087862   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:28.087885   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:28.087933   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:28.125122   60933 cri.go:89] found id: ""
	I1216 21:03:28.125147   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.125154   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:28.125160   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:28.125233   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:28.160619   60933 cri.go:89] found id: ""
	I1216 21:03:28.160646   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.160655   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:28.160661   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:28.160726   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:28.198951   60933 cri.go:89] found id: ""
	I1216 21:03:28.198977   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.198986   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:28.198993   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:28.199059   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:28.236596   60933 cri.go:89] found id: ""
	I1216 21:03:28.236621   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.236629   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:28.236635   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:28.236707   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:28.273955   60933 cri.go:89] found id: ""
	I1216 21:03:28.273979   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.273986   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:28.273992   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:28.274061   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:28.311908   60933 cri.go:89] found id: ""
	I1216 21:03:28.311943   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.311954   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:28.311965   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:28.311979   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:28.363870   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:28.363910   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:28.379919   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:28.379945   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:28.459998   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:28.460019   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:28.460030   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:28.543229   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:28.543306   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:26.936661   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:29.437169   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:26.820563   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:29.319981   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:28.956791   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.456708   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.086525   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:31.100833   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:31.100950   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:31.141356   60933 cri.go:89] found id: ""
	I1216 21:03:31.141385   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.141396   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:31.141403   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:31.141465   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:31.176609   60933 cri.go:89] found id: ""
	I1216 21:03:31.176641   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.176650   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:31.176657   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:31.176721   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:31.213959   60933 cri.go:89] found id: ""
	I1216 21:03:31.213984   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.213991   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:31.213997   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:31.214058   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:31.255183   60933 cri.go:89] found id: ""
	I1216 21:03:31.255208   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.255215   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:31.255220   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:31.255297   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:31.293475   60933 cri.go:89] found id: ""
	I1216 21:03:31.293501   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.293508   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:31.293514   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:31.293561   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:31.332010   60933 cri.go:89] found id: ""
	I1216 21:03:31.332041   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.332052   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:31.332061   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:31.332119   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:31.370301   60933 cri.go:89] found id: ""
	I1216 21:03:31.370331   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.370342   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:31.370349   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:31.370414   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:31.419526   60933 cri.go:89] found id: ""
	I1216 21:03:31.419553   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.419561   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:31.419570   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:31.419583   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:31.480125   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:31.480160   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:31.495464   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:31.495497   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:31.570747   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:31.570773   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:31.570788   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:31.651521   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:31.651564   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:34.200969   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:34.216519   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:34.216596   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:34.254185   60933 cri.go:89] found id: ""
	I1216 21:03:34.254218   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.254227   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:34.254242   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:34.254312   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:34.293194   60933 cri.go:89] found id: ""
	I1216 21:03:34.293225   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.293236   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:34.293242   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:34.293297   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:34.335002   60933 cri.go:89] found id: ""
	I1216 21:03:34.335030   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.335042   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:34.335050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:34.335112   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:34.370854   60933 cri.go:89] found id: ""
	I1216 21:03:34.370880   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.370887   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:34.370893   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:34.370938   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:31.439597   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.935941   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.820337   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.820497   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.955185   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:36.455713   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:34.409155   60933 cri.go:89] found id: ""
	I1216 21:03:34.409181   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.409189   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:34.409195   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:34.409256   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:34.448555   60933 cri.go:89] found id: ""
	I1216 21:03:34.448583   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.448594   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:34.448601   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:34.448663   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:34.486800   60933 cri.go:89] found id: ""
	I1216 21:03:34.486829   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.486842   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:34.486851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:34.486919   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:34.530274   60933 cri.go:89] found id: ""
	I1216 21:03:34.530299   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.530307   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:34.530317   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:34.530335   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:34.601587   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:34.601620   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:34.601637   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:34.680215   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:34.680250   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:34.721362   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:34.721389   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:34.776652   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:34.776693   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:37.292877   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:37.306976   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:37.307060   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:37.349370   60933 cri.go:89] found id: ""
	I1216 21:03:37.349405   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.349416   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:37.349424   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:37.349486   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:37.387213   60933 cri.go:89] found id: ""
	I1216 21:03:37.387271   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.387285   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:37.387294   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:37.387361   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:37.427138   60933 cri.go:89] found id: ""
	I1216 21:03:37.427164   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.427175   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:37.427182   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:37.427269   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:37.466751   60933 cri.go:89] found id: ""
	I1216 21:03:37.466776   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.466783   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:37.466788   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:37.466846   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:37.505078   60933 cri.go:89] found id: ""
	I1216 21:03:37.505115   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.505123   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:37.505128   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:37.505189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:37.548642   60933 cri.go:89] found id: ""
	I1216 21:03:37.548665   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.548673   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:37.548679   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:37.548738   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:37.592354   60933 cri.go:89] found id: ""
	I1216 21:03:37.592379   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.592386   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:37.592391   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:37.592441   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:37.631179   60933 cri.go:89] found id: ""
	I1216 21:03:37.631212   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.631221   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:37.631230   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:37.631261   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:37.683021   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:37.683062   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:37.698056   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:37.698087   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:37.774368   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:37.774397   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:37.774422   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:37.860470   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:37.860511   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:35.936409   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:37.936652   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:36.319436   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:38.819727   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:38.456251   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.957354   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.405278   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:40.420390   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:40.420473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:40.463963   60933 cri.go:89] found id: ""
	I1216 21:03:40.463994   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.464033   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:40.464041   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:40.464107   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:40.510321   60933 cri.go:89] found id: ""
	I1216 21:03:40.510352   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.510369   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:40.510376   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:40.510441   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:40.546580   60933 cri.go:89] found id: ""
	I1216 21:03:40.546610   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.546619   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:40.546624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:40.546686   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:40.583109   60933 cri.go:89] found id: ""
	I1216 21:03:40.583136   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.583144   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:40.583149   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:40.583202   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:40.628747   60933 cri.go:89] found id: ""
	I1216 21:03:40.628771   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.628778   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:40.628784   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:40.628845   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:40.663757   60933 cri.go:89] found id: ""
	I1216 21:03:40.663785   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.663796   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:40.663804   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:40.663867   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:40.703483   60933 cri.go:89] found id: ""
	I1216 21:03:40.703513   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.703522   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:40.703528   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:40.703592   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:40.742585   60933 cri.go:89] found id: ""
	I1216 21:03:40.742622   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.742632   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:40.742641   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:40.742653   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:40.757771   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:40.757809   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:40.837615   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:40.837642   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:40.837656   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:40.915403   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:40.915442   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:40.960762   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:40.960790   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:43.515302   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:43.530831   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:43.530906   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:43.571680   60933 cri.go:89] found id: ""
	I1216 21:03:43.571704   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.571712   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:43.571718   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:43.571779   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:43.615912   60933 cri.go:89] found id: ""
	I1216 21:03:43.615940   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.615948   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:43.615955   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:43.616013   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:43.654206   60933 cri.go:89] found id: ""
	I1216 21:03:43.654231   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.654241   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:43.654249   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:43.654309   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:43.690509   60933 cri.go:89] found id: ""
	I1216 21:03:43.690533   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.690541   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:43.690548   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:43.690595   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:43.728601   60933 cri.go:89] found id: ""
	I1216 21:03:43.728627   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.728634   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:43.728639   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:43.728685   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:43.769092   60933 cri.go:89] found id: ""
	I1216 21:03:43.769130   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.769198   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:43.769215   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:43.769292   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:43.812492   60933 cri.go:89] found id: ""
	I1216 21:03:43.812525   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.812537   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:43.812544   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:43.812613   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:43.852748   60933 cri.go:89] found id: ""
	I1216 21:03:43.852778   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.852787   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:43.852795   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:43.852807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:43.907800   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:43.907839   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:43.922806   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:43.922833   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:44.002511   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:44.002538   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:44.002551   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:44.081760   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:44.081801   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:40.437134   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:42.437214   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.820244   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:43.321298   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:43.455891   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:45.456281   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:46.625868   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:46.640266   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:46.640341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:46.677137   60933 cri.go:89] found id: ""
	I1216 21:03:46.677168   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.677179   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:46.677185   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:46.677241   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:46.714340   60933 cri.go:89] found id: ""
	I1216 21:03:46.714373   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.714382   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:46.714389   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:46.714449   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:46.752713   60933 cri.go:89] found id: ""
	I1216 21:03:46.752743   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.752754   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:46.752763   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:46.752827   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:46.790787   60933 cri.go:89] found id: ""
	I1216 21:03:46.790821   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.790837   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:46.790845   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:46.790902   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:46.827905   60933 cri.go:89] found id: ""
	I1216 21:03:46.827934   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.827945   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:46.827954   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:46.828023   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:46.863522   60933 cri.go:89] found id: ""
	I1216 21:03:46.863547   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.863563   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:46.863570   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:46.863634   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:46.906005   60933 cri.go:89] found id: ""
	I1216 21:03:46.906035   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.906044   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:46.906049   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:46.906103   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:46.947639   60933 cri.go:89] found id: ""
	I1216 21:03:46.947668   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.947679   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:46.947691   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:46.947706   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:47.001693   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:47.001732   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:47.023122   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:47.023166   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:47.108257   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:47.108291   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:47.108303   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:47.184768   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:47.184807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:44.940074   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.437155   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:45.819943   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.820443   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.820700   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.955794   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.960595   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:52.455630   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.729433   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:49.743836   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:49.743903   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:49.783021   60933 cri.go:89] found id: ""
	I1216 21:03:49.783054   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.783066   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:49.783074   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:49.783144   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:49.820371   60933 cri.go:89] found id: ""
	I1216 21:03:49.820399   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.820409   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:49.820416   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:49.820476   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:49.857918   60933 cri.go:89] found id: ""
	I1216 21:03:49.857948   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.857959   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:49.857967   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:49.858033   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:49.899517   60933 cri.go:89] found id: ""
	I1216 21:03:49.899548   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.899558   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:49.899565   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:49.899632   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:49.938771   60933 cri.go:89] found id: ""
	I1216 21:03:49.938797   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.938805   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:49.938810   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:49.938857   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:49.975748   60933 cri.go:89] found id: ""
	I1216 21:03:49.975781   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.975792   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:49.975800   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:49.975876   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:50.013057   60933 cri.go:89] found id: ""
	I1216 21:03:50.013082   60933 logs.go:282] 0 containers: []
	W1216 21:03:50.013090   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:50.013127   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:50.013178   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:50.049106   60933 cri.go:89] found id: ""
	I1216 21:03:50.049138   60933 logs.go:282] 0 containers: []
	W1216 21:03:50.049150   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:50.049161   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:50.049176   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:50.063815   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:50.063847   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:50.137801   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:50.137826   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:50.137841   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:50.218456   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:50.218495   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:50.263347   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:50.263379   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:52.824077   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:52.838096   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:52.838185   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:52.880550   60933 cri.go:89] found id: ""
	I1216 21:03:52.880582   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.880593   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:52.880600   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:52.880658   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:52.919728   60933 cri.go:89] found id: ""
	I1216 21:03:52.919751   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.919759   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:52.919764   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:52.919819   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:52.957519   60933 cri.go:89] found id: ""
	I1216 21:03:52.957542   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.957549   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:52.957555   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:52.957607   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:52.996631   60933 cri.go:89] found id: ""
	I1216 21:03:52.996663   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.996673   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:52.996681   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:52.996745   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:53.059902   60933 cri.go:89] found id: ""
	I1216 21:03:53.060014   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.060030   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:53.060039   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:53.060105   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:53.099367   60933 cri.go:89] found id: ""
	I1216 21:03:53.099395   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.099406   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:53.099419   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:53.099486   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:53.140668   60933 cri.go:89] found id: ""
	I1216 21:03:53.140696   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.140704   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:53.140709   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:53.140777   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:53.179182   60933 cri.go:89] found id: ""
	I1216 21:03:53.179208   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.179216   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:53.179225   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:53.179236   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:53.233441   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:53.233481   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:53.247526   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:53.247569   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:53.321868   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:53.321895   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:53.321911   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:53.410904   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:53.410959   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:49.936523   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:51.936955   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.441538   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:52.319658   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.319887   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.955490   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:57.456080   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:55.954371   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:55.968506   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:55.968570   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:56.005087   60933 cri.go:89] found id: ""
	I1216 21:03:56.005118   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.005130   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:56.005137   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:56.005205   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:56.039443   60933 cri.go:89] found id: ""
	I1216 21:03:56.039467   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.039475   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:56.039486   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:56.039537   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:56.078181   60933 cri.go:89] found id: ""
	I1216 21:03:56.078213   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.078224   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:56.078231   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:56.078289   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:56.115809   60933 cri.go:89] found id: ""
	I1216 21:03:56.115833   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.115841   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:56.115848   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:56.115901   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:56.154299   60933 cri.go:89] found id: ""
	I1216 21:03:56.154323   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.154330   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:56.154336   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:56.154395   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:56.193069   60933 cri.go:89] found id: ""
	I1216 21:03:56.193098   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.193106   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:56.193112   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:56.193161   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:56.231067   60933 cri.go:89] found id: ""
	I1216 21:03:56.231099   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.231118   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:56.231125   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:56.231191   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:56.270980   60933 cri.go:89] found id: ""
	I1216 21:03:56.271011   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.271022   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:56.271035   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:56.271050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:56.321374   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:56.321405   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:56.336802   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:56.336847   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:56.414052   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:56.414078   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:56.414091   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:56.499118   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:56.499158   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:59.049386   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:59.063191   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:59.063300   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:59.102136   60933 cri.go:89] found id: ""
	I1216 21:03:59.102169   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.102180   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:59.102187   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:59.102255   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:59.138311   60933 cri.go:89] found id: ""
	I1216 21:03:59.138340   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.138357   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:59.138364   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:59.138431   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:59.176131   60933 cri.go:89] found id: ""
	I1216 21:03:59.176159   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.176169   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:59.176177   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:59.176259   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:59.214274   60933 cri.go:89] found id: ""
	I1216 21:03:59.214308   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.214320   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:59.214327   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:59.214397   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:59.254499   60933 cri.go:89] found id: ""
	I1216 21:03:59.254524   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.254531   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:59.254537   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:59.254602   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:59.292715   60933 cri.go:89] found id: ""
	I1216 21:03:59.292755   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.292765   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:59.292772   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:59.292836   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:59.333279   60933 cri.go:89] found id: ""
	I1216 21:03:59.333314   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.333325   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:59.333332   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:59.333404   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:59.372071   60933 cri.go:89] found id: ""
	I1216 21:03:59.372104   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.372116   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:59.372126   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:59.372143   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:59.389021   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:59.389051   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 21:03:56.936508   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:59.438217   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:56.323300   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:58.819599   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:59.456242   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:01.956873   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	W1216 21:03:59.503281   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:59.503304   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:59.503316   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:59.581761   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:59.581797   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:59.627604   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:59.627646   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:02.179425   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:02.195786   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:04:02.195850   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:04:02.239763   60933 cri.go:89] found id: ""
	I1216 21:04:02.239790   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.239801   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:04:02.239809   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:04:02.239873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:04:02.278885   60933 cri.go:89] found id: ""
	I1216 21:04:02.278914   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.278926   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:04:02.278935   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:04:02.279004   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:04:02.320701   60933 cri.go:89] found id: ""
	I1216 21:04:02.320731   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.320742   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:04:02.320749   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:04:02.320811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:04:02.357726   60933 cri.go:89] found id: ""
	I1216 21:04:02.357757   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.357767   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:04:02.357773   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:04:02.357826   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:04:02.399577   60933 cri.go:89] found id: ""
	I1216 21:04:02.399609   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.399618   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:04:02.399624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:04:02.399687   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:04:02.445559   60933 cri.go:89] found id: ""
	I1216 21:04:02.445590   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.445600   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:04:02.445607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:04:02.445670   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:04:02.482983   60933 cri.go:89] found id: ""
	I1216 21:04:02.483015   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.483027   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:04:02.483035   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:04:02.483116   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:04:02.523028   60933 cri.go:89] found id: ""
	I1216 21:04:02.523055   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.523063   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:04:02.523073   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:04:02.523084   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:02.577447   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:04:02.577487   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:04:02.594539   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:04:02.594567   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:04:02.683805   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:04:02.683832   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:04:02.683848   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:04:02.763377   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:04:02.763416   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:04:01.937214   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:04.436771   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:01.319860   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:03.320323   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:04.454654   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:06.456145   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:05.311029   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:05.328358   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:04:05.328438   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:04:05.367378   60933 cri.go:89] found id: ""
	I1216 21:04:05.367402   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.367409   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:04:05.367419   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:04:05.367468   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:04:05.406268   60933 cri.go:89] found id: ""
	I1216 21:04:05.406291   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.406301   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:04:05.406306   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:04:05.406353   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:04:05.444737   60933 cri.go:89] found id: ""
	I1216 21:04:05.444767   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.444778   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:04:05.444787   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:04:05.444836   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:04:05.484044   60933 cri.go:89] found id: ""
	I1216 21:04:05.484132   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.484153   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:04:05.484161   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:04:05.484222   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:04:05.523395   60933 cri.go:89] found id: ""
	I1216 21:04:05.523420   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.523431   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:04:05.523439   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:04:05.523501   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:04:05.566925   60933 cri.go:89] found id: ""
	I1216 21:04:05.566954   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.566967   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:04:05.566974   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:04:05.567036   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:04:05.611275   60933 cri.go:89] found id: ""
	I1216 21:04:05.611303   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.611314   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:04:05.611321   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:04:05.611396   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:04:05.650340   60933 cri.go:89] found id: ""
	I1216 21:04:05.650371   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.650379   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:04:05.650389   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:04:05.650400   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:05.702277   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:04:05.702321   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:04:05.718685   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:04:05.718713   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:04:05.794979   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:04:05.795005   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:04:05.795020   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:04:05.897348   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:04:05.897383   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
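For context, the repeated "0 containers" results in the cycles above come from probing the container runtime with crictl for each control-plane component. A minimal sketch of that probe, assuming only the crictl invocation shown in the log (the helper name and component list are illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers mirrors the probe in the log: ask crictl for any container
	// (running or exited) whose name matches the given control-plane component.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // one container ID per line when any exist
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
			ids, err := listContainers(c)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c) // corresponds to the W-level lines above
				continue
			}
			fmt.Printf("%s: %v\n", c, ids)
		}
	}

An empty result for every component, as seen above, is what drives the fallback to "Gathering logs for kubelet / dmesg / CRI-O" instead of per-component container logs.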
	I1216 21:04:08.447268   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:08.462553   60933 kubeadm.go:597] duration metric: took 4m2.545161532s to restartPrimaryControlPlane
	W1216 21:04:08.462621   60933 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:08.462650   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:06.437699   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:08.936904   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:05.813413   60829 pod_ready.go:82] duration metric: took 4m0.000648161s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:05.813448   60829 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:05.813472   60829 pod_ready.go:39] duration metric: took 4m14.577422135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:05.813498   60829 kubeadm.go:597] duration metric: took 4m22.010606819s to restartPrimaryControlPlane
	W1216 21:04:05.813559   60829 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:05.813593   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:10.315541   60933 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.85286561s)
	I1216 21:04:10.315622   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:10.330937   60933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:10.343702   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:10.356498   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:10.356526   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:10.356579   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:04:10.367777   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:10.367847   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:10.379109   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:04:10.389258   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:10.389313   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:10.399959   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:04:10.410664   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:10.410734   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:10.423138   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:04:10.433922   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:10.433976   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
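The cleanup just performed follows a simple pattern: for each kubeconfig under /etc/kubernetes, grep for the expected control-plane URL and remove the file when the URL is absent (or, as here, when the file does not exist at all) so that the subsequent kubeadm init starts from a clean slate. A rough sketch under those assumptions, with paths and URL taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const apiURL = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the URL is missing or the file is absent;
			// either way the stale config is removed before `kubeadm init` runs.
			if err := exec.Command("sudo", "grep", apiURL, f).Run(); err != nil {
				fmt.Printf("%s: %v - removing\n", f, err)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}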
	I1216 21:04:10.445297   60933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:10.524236   60933 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 21:04:10.524344   60933 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:10.680331   60933 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:10.680489   60933 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:10.680641   60933 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 21:04:10.877305   60933 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:10.879375   60933 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:10.879496   60933 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:10.879567   60933 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:10.879647   60933 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:10.879748   60933 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:10.879865   60933 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:10.880127   60933 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:10.881047   60933 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:10.881874   60933 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:10.882778   60933 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:10.883678   60933 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:10.884029   60933 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:10.884130   60933 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:11.034011   60933 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:11.273509   60933 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:11.477553   60933 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:11.542158   60933 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:11.565791   60933 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:11.567317   60933 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:11.567409   60933 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:11.763223   60933 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:08.955135   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:10.957061   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:11.766107   60933 out.go:235]   - Booting up control plane ...
	I1216 21:04:11.766257   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:11.766367   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:11.768484   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:11.773601   60933 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:11.780554   60933 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 21:04:11.436931   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:13.437532   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:13.455175   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:15.455370   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.456801   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:15.936107   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.937233   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.949449   60421 pod_ready.go:82] duration metric: took 4m0.000885381s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:17.949484   60421 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:17.949501   60421 pod_ready.go:39] duration metric: took 4m10.554596731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:17.949525   60421 kubeadm.go:597] duration metric: took 4m42.414672113s to restartPrimaryControlPlane
	W1216 21:04:17.949588   60421 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:17.949619   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:19.938104   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:22.436710   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:24.936550   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:26.936809   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:29.437478   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
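The pod_ready lines repeated throughout this section poll the Ready condition of the metrics-server pods, which stays False for the full 4m0s budget before the wait gives up. One way to inspect that condition directly, assuming the pod name shown in the log and a working kubeconfig:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Pod name taken from the log; the jsonpath pulls just the Ready condition status.
		pod := "metrics-server-f79f97bbb-d6gmd"
		out, err := exec.Command("kubectl", "get", "pod", pod,
			"-n", "kube-system",
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl get pod failed:", err)
			return
		}
		fmt.Printf("Ready=%s\n", out) // prints "False" while the pod is unready
	}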
	I1216 21:04:33.833179   60829 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (28.019561403s)
	I1216 21:04:33.833265   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:33.850170   60829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:33.862112   60829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:33.873752   60829 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:33.873777   60829 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:33.873832   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1216 21:04:33.885038   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:33.885115   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:33.897352   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1216 21:04:33.911055   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:33.911115   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:33.925077   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1216 21:04:33.938925   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:33.938997   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:33.952022   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1216 21:04:33.963099   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:33.963176   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:04:33.974080   60829 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:34.031525   60829 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:04:34.031643   60829 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:34.153173   60829 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:34.153340   60829 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:34.153453   60829 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:04:34.166258   60829 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:31.936620   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:33.938157   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:34.168275   60829 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:34.168388   60829 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:34.168463   60829 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:34.168545   60829 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:34.168633   60829 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:34.168740   60829 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:34.168837   60829 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:34.168934   60829 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:34.169020   60829 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:34.169119   60829 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:34.169222   60829 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:34.169278   60829 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:34.169365   60829 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:34.277660   60829 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:34.526364   60829 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:04:34.629728   60829 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:34.757824   60829 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:34.838922   60829 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:34.839431   60829 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:34.841925   60829 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:34.843761   60829 out.go:235]   - Booting up control plane ...
	I1216 21:04:34.843874   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:34.843945   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:34.846919   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:34.866038   60829 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:34.875031   60829 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:34.875112   60829 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:35.016713   60829 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:04:35.016879   60829 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:04:36.437043   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:38.437584   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:36.017947   60829 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001159452s
	I1216 21:04:36.018086   60829 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:04:40.519460   60829 kubeadm.go:310] [api-check] The API server is healthy after 4.501460025s
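The kubelet-check above polls http://127.0.0.1:10248/healthz, and the api-check waits for the API server's health endpoint to answer. A minimal polling sketch meant to be run on the node itself; the HTTPS URL (this profile serves on 8444 per the join command below) and the skipped certificate verification are assumptions for illustration, not how kubeadm performs the check:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthy polls url until it returns HTTP 200 or the deadline passes.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Skipping TLS verification is purely for the sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		fmt.Println(waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute)) // kubelet
		fmt.Println(waitHealthy("https://127.0.0.1:8444/healthz", 4*time.Minute)) // apiserver, assumed local port
	}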
	I1216 21:04:40.533680   60829 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:04:40.552611   60829 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:04:40.585691   60829 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:04:40.585905   60829 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-327790 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:04:40.613752   60829 kubeadm.go:310] [bootstrap-token] Using token: w829op.p4bszg1q76emsxit
	I1216 21:04:40.615428   60829 out.go:235]   - Configuring RBAC rules ...
	I1216 21:04:40.615556   60829 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:04:40.629296   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:04:40.638449   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:04:40.644143   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:04:40.648665   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:04:40.653151   60829 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:04:40.926399   60829 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:04:41.370569   60829 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:04:41.927555   60829 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:04:41.928692   60829 kubeadm.go:310] 
	I1216 21:04:41.928769   60829 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:04:41.928779   60829 kubeadm.go:310] 
	I1216 21:04:41.928851   60829 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:04:41.928878   60829 kubeadm.go:310] 
	I1216 21:04:41.928928   60829 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:04:41.929005   60829 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:04:41.929053   60829 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:04:41.929060   60829 kubeadm.go:310] 
	I1216 21:04:41.929107   60829 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:04:41.929114   60829 kubeadm.go:310] 
	I1216 21:04:41.929151   60829 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:04:41.929157   60829 kubeadm.go:310] 
	I1216 21:04:41.929205   60829 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:04:41.929264   60829 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:04:41.929325   60829 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:04:41.929354   60829 kubeadm.go:310] 
	I1216 21:04:41.929527   60829 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:04:41.929657   60829 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:04:41.929674   60829 kubeadm.go:310] 
	I1216 21:04:41.929787   60829 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token w829op.p4bszg1q76emsxit \
	I1216 21:04:41.929941   60829 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:04:41.929975   60829 kubeadm.go:310] 	--control-plane 
	I1216 21:04:41.929984   60829 kubeadm.go:310] 
	I1216 21:04:41.930103   60829 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:04:41.930124   60829 kubeadm.go:310] 
	I1216 21:04:41.930245   60829 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token w829op.p4bszg1q76emsxit \
	I1216 21:04:41.930378   60829 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:04:41.931554   60829 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:04:41.931685   60829 cni.go:84] Creating CNI manager for ""
	I1216 21:04:41.931699   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:04:41.933748   60829 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:04:40.937882   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:43.436864   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:41.935317   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:04:41.947502   60829 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
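The bridge CNI step above simply drops a conflist into /etc/cni/net.d on the node. The sketch below writes an illustrative bridge configuration of the same general shape; the exact 496-byte 1-k8s.conflist minikube copies is not reproduced in the log, so every field value here is an assumption:

	package main

	import (
		"fmt"
		"os"
	)

	// Illustrative bridge CNI conflist; values are assumptions, not the real file.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			fmt.Println("write conflist:", err)
		}
	}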
	I1216 21:04:41.976180   60829 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:04:41.976288   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:41.976323   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-327790 minikube.k8s.io/updated_at=2024_12_16T21_04_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=default-k8s-diff-port-327790 minikube.k8s.io/primary=true
	I1216 21:04:42.010154   60829 ops.go:34] apiserver oom_adj: -16
	I1216 21:04:42.181919   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:42.682201   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:43.182557   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:43.682418   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:44.182318   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:44.682793   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.182342   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.682678   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.777484   60829 kubeadm.go:1113] duration metric: took 3.801254961s to wait for elevateKubeSystemPrivileges
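The burst of `kubectl get sa default` calls above is a readiness retry: after creating the minikube-rbac clusterrolebinding, the start waits until the "default" service account exists before treating the kube-system privilege elevation as settled. A rough sketch of that retry, with the kubectl path and kubeconfig taken from the log (interval and timeout are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.32.0/kubectl"
		kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Succeeds once the control plane has created the default service account.
			if err := exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}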
	I1216 21:04:45.777522   60829 kubeadm.go:394] duration metric: took 5m2.030533321s to StartCluster
	I1216 21:04:45.777543   60829 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:45.777644   60829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:04:45.780034   60829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:45.780369   60829 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:04:45.780450   60829 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:04:45.780566   60829 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780579   60829 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780595   60829 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-327790"
	W1216 21:04:45.780606   60829 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:04:45.780599   60829 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780609   60829 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-327790"
	I1216 21:04:45.780627   60829 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-327790"
	I1216 21:04:45.780627   60829 config.go:182] Loaded profile config "default-k8s-diff-port-327790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	W1216 21:04:45.780638   60829 addons.go:243] addon metrics-server should already be in state true
	I1216 21:04:45.780648   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.780675   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.781091   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781091   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781132   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781136   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.781174   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.781137   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.782022   60829 out.go:177] * Verifying Kubernetes components...
	I1216 21:04:45.783549   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:04:45.799326   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42295
	I1216 21:04:45.799443   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
	I1216 21:04:45.799865   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.800491   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.800510   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.800588   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.801082   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.801102   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.801178   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I1216 21:04:45.801202   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.801517   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.801539   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.801707   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.801925   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.801959   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.801974   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.801992   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.802319   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.802817   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.802857   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.805750   60829 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-327790"
	W1216 21:04:45.805775   60829 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:04:45.805806   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.806153   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.806193   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.820545   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46261
	I1216 21:04:45.821062   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.821598   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.821625   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.822086   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.822294   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.823995   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.824775   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I1216 21:04:45.825269   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.825754   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.825778   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.826117   60829 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:04:45.826158   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.826843   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.826892   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.827527   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:04:45.827557   60829 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:04:45.827577   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.829352   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I1216 21:04:45.829769   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.830197   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.830217   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.830543   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.830767   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.831413   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.832010   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.832030   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.832202   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.832424   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.832496   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.832847   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.833056   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.834475   60829 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:04:45.835944   60829 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:45.835965   60829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:04:45.835983   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.839118   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.839533   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.839560   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.839744   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.839947   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.840087   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.840218   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.845365   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
	I1216 21:04:45.845925   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.847042   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.847060   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.847450   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.847669   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.849934   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.850165   60829 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:45.850182   60829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:04:45.850199   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.853083   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.853493   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.853518   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.853679   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.853848   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.854024   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.854177   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.978935   60829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:04:46.010601   60829 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-327790" to be "Ready" ...
	I1216 21:04:46.019674   60829 node_ready.go:49] node "default-k8s-diff-port-327790" has status "Ready":"True"
	I1216 21:04:46.019704   60829 node_ready.go:38] duration metric: took 9.066576ms for node "default-k8s-diff-port-327790" to be "Ready" ...
	I1216 21:04:46.019715   60829 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:46.033957   60829 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:46.103779   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:04:46.103812   60829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:04:46.120299   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:46.171131   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:04:46.171171   60829 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:04:46.171280   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:46.244556   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:46.244587   60829 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:04:46.332646   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:47.461793   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.34145582s)
	I1216 21:04:47.461871   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.129193295s)
	I1216 21:04:47.461793   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.290486436s)
	I1216 21:04:47.461899   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461913   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461918   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.461875   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461982   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.461927   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462463   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462469   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462480   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462488   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462494   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462504   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462506   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462511   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462516   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462521   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462529   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462556   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462573   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462581   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462588   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462805   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462816   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462816   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462827   60829 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-327790"
	I1216 21:04:47.462841   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462848   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.463049   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.463067   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.524466   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.524497   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.524822   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.524843   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.524869   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.526679   60829 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1216 21:04:45.861404   60421 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.911759863s)
	I1216 21:04:45.861483   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:45.889560   60421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:45.922090   60421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:45.945227   60421 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:45.945261   60421 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:45.945306   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:04:45.960594   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:45.960666   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:45.980613   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:04:46.005349   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:46.005431   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:46.021683   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:04:46.032967   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:46.033042   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:46.064718   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:04:46.078736   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:46.078805   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:04:46.092798   60421 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:46.293434   60421 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:04:45.430910   60215 pod_ready.go:82] duration metric: took 4m0.000948437s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:45.430950   60215 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:45.430970   60215 pod_ready.go:39] duration metric: took 4m12.926677248s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:45.431002   60215 kubeadm.go:597] duration metric: took 4m20.847109652s to restartPrimaryControlPlane
	W1216 21:04:45.431059   60215 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:45.431092   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:47.527909   60829 addons.go:510] duration metric: took 1.747463467s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1216 21:04:48.047956   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:51.781856   60933 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 21:04:51.782285   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:04:51.782543   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:04:54.704462   60421 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:04:54.704514   60421 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:54.704600   60421 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:54.704736   60421 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:54.704839   60421 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:04:54.704894   60421 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:54.706650   60421 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:54.706771   60421 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:54.706865   60421 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:54.706999   60421 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:54.707113   60421 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:54.707256   60421 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:54.707344   60421 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:54.707478   60421 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:54.707573   60421 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:54.707683   60421 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:54.707806   60421 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:54.707851   60421 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:54.707902   60421 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:54.707968   60421 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:54.708056   60421 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:04:54.708127   60421 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:54.708225   60421 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:54.708305   60421 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:54.708427   60421 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:54.708526   60421 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:54.710014   60421 out.go:235]   - Booting up control plane ...
	I1216 21:04:54.710113   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:54.710197   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:54.710254   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:54.710361   60421 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:54.710457   60421 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:54.710511   60421 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:54.710670   60421 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:04:54.710792   60421 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:04:54.710852   60421 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.532878ms
	I1216 21:04:54.710912   60421 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:04:54.710982   60421 kubeadm.go:310] [api-check] The API server is healthy after 5.50189872s
	I1216 21:04:54.711125   60421 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:04:54.711281   60421 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:04:54.711362   60421 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:04:54.711618   60421 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-232338 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:04:54.711712   60421 kubeadm.go:310] [bootstrap-token] Using token: knn1cl.i9horbjuutctjfyf
	I1216 21:04:54.714363   60421 out.go:235]   - Configuring RBAC rules ...
	I1216 21:04:54.714488   60421 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:04:54.714560   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:04:54.714674   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:04:54.714820   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:04:54.714914   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:04:54.714981   60421 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:04:54.715083   60421 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:04:54.715159   60421 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:04:54.715228   60421 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:04:54.715237   60421 kubeadm.go:310] 
	I1216 21:04:54.715345   60421 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:04:54.715359   60421 kubeadm.go:310] 
	I1216 21:04:54.715455   60421 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:04:54.715463   60421 kubeadm.go:310] 
	I1216 21:04:54.715510   60421 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:04:54.715596   60421 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:04:54.715669   60421 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:04:54.715679   60421 kubeadm.go:310] 
	I1216 21:04:54.715767   60421 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:04:54.715775   60421 kubeadm.go:310] 
	I1216 21:04:54.715842   60421 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:04:54.715851   60421 kubeadm.go:310] 
	I1216 21:04:54.715908   60421 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:04:54.715969   60421 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:04:54.716026   60421 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:04:54.716032   60421 kubeadm.go:310] 
	I1216 21:04:54.716106   60421 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:04:54.716171   60421 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:04:54.716177   60421 kubeadm.go:310] 
	I1216 21:04:54.716240   60421 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token knn1cl.i9horbjuutctjfyf \
	I1216 21:04:54.716340   60421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:04:54.716374   60421 kubeadm.go:310] 	--control-plane 
	I1216 21:04:54.716384   60421 kubeadm.go:310] 
	I1216 21:04:54.716457   60421 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:04:54.716467   60421 kubeadm.go:310] 
	I1216 21:04:54.716534   60421 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token knn1cl.i9horbjuutctjfyf \
	I1216 21:04:54.716634   60421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:04:54.716644   60421 cni.go:84] Creating CNI manager for ""
	I1216 21:04:54.716651   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:04:54.718260   60421 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:04:50.542207   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:52.542453   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:55.040960   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:56.042145   60829 pod_ready.go:93] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.042175   60829 pod_ready.go:82] duration metric: took 10.008191514s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.042192   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.047996   60829 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.048022   60829 pod_ready.go:82] duration metric: took 5.821217ms for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.048031   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.052582   60829 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.052608   60829 pod_ready.go:82] duration metric: took 4.569092ms for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.052619   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.056805   60829 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.056834   60829 pod_ready.go:82] duration metric: took 4.206726ms for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.056841   60829 pod_ready.go:39] duration metric: took 10.037112061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:56.056855   60829 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:04:56.056904   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:56.076993   60829 api_server.go:72] duration metric: took 10.296578804s to wait for apiserver process to appear ...
	I1216 21:04:56.077023   60829 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:04:56.077045   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 21:04:56.082250   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1216 21:04:56.083348   60829 api_server.go:141] control plane version: v1.32.0
	I1216 21:04:56.083369   60829 api_server.go:131] duration metric: took 6.339438ms to wait for apiserver health ...
	I1216 21:04:56.083377   60829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:04:56.090255   60829 system_pods.go:59] 9 kube-system pods found
	I1216 21:04:56.090290   60829 system_pods.go:61] "coredns-668d6bf9bc-2qcfx" [4ac98efa-96ff-4564-93de-4a61de7a6507] Running
	I1216 21:04:56.090302   60829 system_pods.go:61] "coredns-668d6bf9bc-fb7wx" [f2f2c0e7-893f-45ba-8da9-3b03f5560d89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:04:56.090310   60829 system_pods.go:61] "etcd-default-k8s-diff-port-327790" [5363e160-ef78-4737-89f9-5f4d0f0eab95] Running
	I1216 21:04:56.090318   60829 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-327790" [b53c6be6-e476-4a5a-80c2-96e701736820] Running
	I1216 21:04:56.090324   60829 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-327790" [57d8747a-7258-48c3-9bcd-6fedaa8b7431] Running
	I1216 21:04:56.090329   60829 system_pods.go:61] "kube-proxy-njqp8" [e5f1789d-b343-4c2e-b078-4a15f4b18569] Running
	I1216 21:04:56.090334   60829 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-327790" [e2303bbd-b9d9-4392-867f-6f5f43f74826] Running
	I1216 21:04:56.090342   60829 system_pods.go:61] "metrics-server-f79f97bbb-84xtf" [569c6717-dc12-474f-8156-d2dd9e410a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:04:56.090349   60829 system_pods.go:61] "storage-provisioner" [4e5b12f0-3d96-4dd0-81e7-300b82058d47] Running
	I1216 21:04:56.090360   60829 system_pods.go:74] duration metric: took 6.975795ms to wait for pod list to return data ...
	I1216 21:04:56.090373   60829 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:04:56.093967   60829 default_sa.go:45] found service account: "default"
	I1216 21:04:56.093998   60829 default_sa.go:55] duration metric: took 3.616693ms for default service account to be created ...
	I1216 21:04:56.094010   60829 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:04:56.241532   60829 system_pods.go:86] 9 kube-system pods found
	I1216 21:04:56.241568   60829 system_pods.go:89] "coredns-668d6bf9bc-2qcfx" [4ac98efa-96ff-4564-93de-4a61de7a6507] Running
	I1216 21:04:56.241582   60829 system_pods.go:89] "coredns-668d6bf9bc-fb7wx" [f2f2c0e7-893f-45ba-8da9-3b03f5560d89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:04:56.241589   60829 system_pods.go:89] "etcd-default-k8s-diff-port-327790" [5363e160-ef78-4737-89f9-5f4d0f0eab95] Running
	I1216 21:04:56.241597   60829 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-327790" [b53c6be6-e476-4a5a-80c2-96e701736820] Running
	I1216 21:04:56.241605   60829 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-327790" [57d8747a-7258-48c3-9bcd-6fedaa8b7431] Running
	I1216 21:04:56.241611   60829 system_pods.go:89] "kube-proxy-njqp8" [e5f1789d-b343-4c2e-b078-4a15f4b18569] Running
	I1216 21:04:56.241617   60829 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-327790" [e2303bbd-b9d9-4392-867f-6f5f43f74826] Running
	I1216 21:04:56.241624   60829 system_pods.go:89] "metrics-server-f79f97bbb-84xtf" [569c6717-dc12-474f-8156-d2dd9e410a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:04:56.241629   60829 system_pods.go:89] "storage-provisioner" [4e5b12f0-3d96-4dd0-81e7-300b82058d47] Running
	I1216 21:04:56.241639   60829 system_pods.go:126] duration metric: took 147.621114ms to wait for k8s-apps to be running ...
	I1216 21:04:56.241656   60829 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:04:56.241730   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:56.258891   60829 system_svc.go:56] duration metric: took 17.227056ms WaitForService to wait for kubelet
	I1216 21:04:56.258935   60829 kubeadm.go:582] duration metric: took 10.478521255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:04:56.258962   60829 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:04:56.438641   60829 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:04:56.438667   60829 node_conditions.go:123] node cpu capacity is 2
	I1216 21:04:56.438679   60829 node_conditions.go:105] duration metric: took 179.711624ms to run NodePressure ...
	I1216 21:04:56.438692   60829 start.go:241] waiting for startup goroutines ...
	I1216 21:04:56.438700   60829 start.go:246] waiting for cluster config update ...
	I1216 21:04:56.438714   60829 start.go:255] writing updated cluster config ...
	I1216 21:04:56.438975   60829 ssh_runner.go:195] Run: rm -f paused
	I1216 21:04:56.490195   60829 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:04:56.492395   60829 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-327790" cluster and "default" namespace by default
	I1216 21:04:54.719483   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:04:54.732035   60421 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:04:54.754010   60421 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:04:54.754122   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:54.754177   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-232338 minikube.k8s.io/updated_at=2024_12_16T21_04_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=no-preload-232338 minikube.k8s.io/primary=true
	I1216 21:04:54.773008   60421 ops.go:34] apiserver oom_adj: -16
	I1216 21:04:55.009573   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:55.510039   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:56.009645   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:56.509608   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:57.009714   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:57.509902   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.009901   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.509631   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.632896   60421 kubeadm.go:1113] duration metric: took 3.878846316s to wait for elevateKubeSystemPrivileges
	I1216 21:04:58.632933   60421 kubeadm.go:394] duration metric: took 5m23.15322559s to StartCluster
	I1216 21:04:58.632951   60421 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:58.633031   60421 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:04:58.635409   60421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:58.635720   60421 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:04:58.635835   60421 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:04:58.635944   60421 addons.go:69] Setting storage-provisioner=true in profile "no-preload-232338"
	I1216 21:04:58.635958   60421 config.go:182] Loaded profile config "no-preload-232338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:04:58.635966   60421 addons.go:234] Setting addon storage-provisioner=true in "no-preload-232338"
	I1216 21:04:58.635969   60421 addons.go:69] Setting default-storageclass=true in profile "no-preload-232338"
	W1216 21:04:58.635975   60421 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:04:58.635986   60421 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-232338"
	I1216 21:04:58.636005   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.635997   60421 addons.go:69] Setting metrics-server=true in profile "no-preload-232338"
	I1216 21:04:58.636029   60421 addons.go:234] Setting addon metrics-server=true in "no-preload-232338"
	W1216 21:04:58.636038   60421 addons.go:243] addon metrics-server should already be in state true
	I1216 21:04:58.636069   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.636428   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636460   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.636428   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636513   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636532   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.636549   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.637558   60421 out.go:177] * Verifying Kubernetes components...
	I1216 21:04:58.639254   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:04:58.652770   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
	I1216 21:04:58.652789   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35093
	I1216 21:04:58.653247   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.653368   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.653818   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.653836   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.653944   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.653963   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.654562   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.654565   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.654775   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.655078   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.655117   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.656383   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I1216 21:04:58.656987   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.657520   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.657553   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.657933   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.658517   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.658566   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.658692   60421 addons.go:234] Setting addon default-storageclass=true in "no-preload-232338"
	W1216 21:04:58.658708   60421 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:04:58.658737   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.659001   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.659043   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.672942   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34153
	I1216 21:04:58.673478   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.674034   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.674063   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.674421   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.674594   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I1216 21:04:58.674614   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.674994   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.675686   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.675699   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.676334   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.676480   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.676898   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.676931   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.679230   60421 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:04:58.680032   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33309
	I1216 21:04:58.680609   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.680754   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:04:58.680772   60421 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:04:58.680794   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.681202   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.681221   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.681610   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.681815   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.683608   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.684266   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.684765   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.684793   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.684925   60421 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:04:56.783069   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:04:56.783323   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:04:58.684932   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.685156   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.685321   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.685515   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.686360   60421 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:58.686379   60421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:04:58.686396   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.689909   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.690365   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.690392   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.690698   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.690927   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.691095   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.691305   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.695899   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36017
	I1216 21:04:58.696274   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.696758   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.696777   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.697064   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.697225   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.698530   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.698751   60421 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:58.698766   60421 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:04:58.698784   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.701986   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.702420   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.702473   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.702655   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.702839   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.702979   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.703197   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.866115   60421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:04:58.892287   60421 node_ready.go:35] waiting up to 6m0s for node "no-preload-232338" to be "Ready" ...
	I1216 21:04:58.949580   60421 node_ready.go:49] node "no-preload-232338" has status "Ready":"True"
	I1216 21:04:58.949610   60421 node_ready.go:38] duration metric: took 57.274849ms for node "no-preload-232338" to be "Ready" ...
	I1216 21:04:58.949622   60421 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:58.983955   60421 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:59.036124   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:59.039113   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:04:59.039139   60421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:04:59.087493   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:04:59.087531   60421 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:04:59.094976   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:59.129816   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:59.129840   60421 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:04:59.236390   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:00.157688   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.121522553s)
	I1216 21:05:00.157736   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.157751   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.157764   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.06274536s)
	I1216 21:05:00.157830   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.157851   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158259   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158270   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158282   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158288   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158297   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158314   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.158327   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158319   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158344   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.158352   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158589   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158604   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158624   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158589   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158655   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.182819   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.182844   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.183229   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.183271   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.679810   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.44337328s)
	I1216 21:05:00.679867   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.679880   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.680233   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.680254   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.680266   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.680274   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.680612   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.680632   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.680643   60421 addons.go:475] Verifying addon metrics-server=true in "no-preload-232338"
	I1216 21:05:00.680659   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.682400   60421 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1216 21:05:00.684226   60421 addons.go:510] duration metric: took 2.048395371s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1216 21:05:00.997599   60421 pod_ready.go:103] pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:01.990706   60421 pod_ready.go:93] pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:01.990733   60421 pod_ready.go:82] duration metric: took 3.006750411s for pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:01.990742   60421 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:03.998055   60421 pod_ready.go:103] pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:05.997310   60421 pod_ready.go:93] pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:05.997334   60421 pod_ready.go:82] duration metric: took 4.006586503s for pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:05.997346   60421 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.002576   60421 pod_ready.go:93] pod "etcd-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.002597   60421 pod_ready.go:82] duration metric: took 5.244238ms for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.002607   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.007407   60421 pod_ready.go:93] pod "kube-apiserver-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.007435   60421 pod_ready.go:82] duration metric: took 4.820838ms for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.007449   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.012239   60421 pod_ready.go:93] pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.012263   60421 pod_ready.go:82] duration metric: took 4.806874ms for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.012273   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m5hq8" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.017087   60421 pod_ready.go:93] pod "kube-proxy-m5hq8" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.017111   60421 pod_ready.go:82] duration metric: took 4.830348ms for pod "kube-proxy-m5hq8" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.017124   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.393947   60421 pod_ready.go:93] pod "kube-scheduler-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.393978   60421 pod_ready.go:82] duration metric: took 376.845934ms for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.393989   60421 pod_ready.go:39] duration metric: took 7.444356073s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:05:06.394008   60421 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:05:06.394074   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:05:06.410287   60421 api_server.go:72] duration metric: took 7.774519412s to wait for apiserver process to appear ...
	I1216 21:05:06.410327   60421 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:05:06.410363   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:05:06.415344   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 200:
	ok
	I1216 21:05:06.416302   60421 api_server.go:141] control plane version: v1.32.0
	I1216 21:05:06.416324   60421 api_server.go:131] duration metric: took 5.989768ms to wait for apiserver health ...
	I1216 21:05:06.416333   60421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:05:06.598174   60421 system_pods.go:59] 9 kube-system pods found
	I1216 21:05:06.598205   60421 system_pods.go:61] "coredns-668d6bf9bc-4wwvd" [1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e] Running
	I1216 21:05:06.598210   60421 system_pods.go:61] "coredns-668d6bf9bc-c4qfj" [b9bf3125-1e6d-4794-a2e6-2ff7ed5132b1] Running
	I1216 21:05:06.598214   60421 system_pods.go:61] "etcd-no-preload-232338" [5318f756-4c64-46be-b71b-94d53f48f0e9] Running
	I1216 21:05:06.598218   60421 system_pods.go:61] "kube-apiserver-no-preload-232338" [8d8fa68c-80ab-4747-a2ce-eeaff8847c29] Running
	I1216 21:05:06.598222   60421 system_pods.go:61] "kube-controller-manager-no-preload-232338" [8626806c-cd3f-488c-95c3-4b909878c1e4] Running
	I1216 21:05:06.598224   60421 system_pods.go:61] "kube-proxy-m5hq8" [ca0d357a-dda2-4508-a954-5c67eaf5b8ac] Running
	I1216 21:05:06.598229   60421 system_pods.go:61] "kube-scheduler-no-preload-232338" [8944107e-9e5c-474b-a0c1-9461e797a131] Running
	I1216 21:05:06.598236   60421 system_pods.go:61] "metrics-server-f79f97bbb-l7dcr" [fabafb40-1cb8-427b-88a6-37eeb6fd5b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:06.598240   60421 system_pods.go:61] "storage-provisioner" [3b742666-dfd4-4c9b-95a9-25367ec2a718] Running
	I1216 21:05:06.598248   60421 system_pods.go:74] duration metric: took 181.908567ms to wait for pod list to return data ...
	I1216 21:05:06.598255   60421 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:05:06.794774   60421 default_sa.go:45] found service account: "default"
	I1216 21:05:06.794805   60421 default_sa.go:55] duration metric: took 196.542698ms for default service account to be created ...
	I1216 21:05:06.794823   60421 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:05:06.998297   60421 system_pods.go:86] 9 kube-system pods found
	I1216 21:05:06.998332   60421 system_pods.go:89] "coredns-668d6bf9bc-4wwvd" [1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e] Running
	I1216 21:05:06.998341   60421 system_pods.go:89] "coredns-668d6bf9bc-c4qfj" [b9bf3125-1e6d-4794-a2e6-2ff7ed5132b1] Running
	I1216 21:05:06.998348   60421 system_pods.go:89] "etcd-no-preload-232338" [5318f756-4c64-46be-b71b-94d53f48f0e9] Running
	I1216 21:05:06.998354   60421 system_pods.go:89] "kube-apiserver-no-preload-232338" [8d8fa68c-80ab-4747-a2ce-eeaff8847c29] Running
	I1216 21:05:06.998359   60421 system_pods.go:89] "kube-controller-manager-no-preload-232338" [8626806c-cd3f-488c-95c3-4b909878c1e4] Running
	I1216 21:05:06.998364   60421 system_pods.go:89] "kube-proxy-m5hq8" [ca0d357a-dda2-4508-a954-5c67eaf5b8ac] Running
	I1216 21:05:06.998369   60421 system_pods.go:89] "kube-scheduler-no-preload-232338" [8944107e-9e5c-474b-a0c1-9461e797a131] Running
	I1216 21:05:06.998378   60421 system_pods.go:89] "metrics-server-f79f97bbb-l7dcr" [fabafb40-1cb8-427b-88a6-37eeb6fd5b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:06.998385   60421 system_pods.go:89] "storage-provisioner" [3b742666-dfd4-4c9b-95a9-25367ec2a718] Running
	I1216 21:05:06.998397   60421 system_pods.go:126] duration metric: took 203.564807ms to wait for k8s-apps to be running ...
	I1216 21:05:06.998407   60421 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:05:06.998457   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:07.014979   60421 system_svc.go:56] duration metric: took 16.561363ms WaitForService to wait for kubelet
	I1216 21:05:07.015013   60421 kubeadm.go:582] duration metric: took 8.379260538s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:05:07.015029   60421 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:05:07.195470   60421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:05:07.195504   60421 node_conditions.go:123] node cpu capacity is 2
	I1216 21:05:07.195516   60421 node_conditions.go:105] duration metric: took 180.480949ms to run NodePressure ...
	I1216 21:05:07.195530   60421 start.go:241] waiting for startup goroutines ...
	I1216 21:05:07.195541   60421 start.go:246] waiting for cluster config update ...
	I1216 21:05:07.195554   60421 start.go:255] writing updated cluster config ...
	I1216 21:05:07.195857   60421 ssh_runner.go:195] Run: rm -f paused
	I1216 21:05:07.244442   60421 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:05:07.246788   60421 out.go:177] * Done! kubectl is now configured to use "no-preload-232338" cluster and "default" namespace by default
	I1216 21:05:06.784032   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:05:06.784224   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:05:13.066274   60215 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.635155592s)
	I1216 21:05:13.066379   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:13.096145   60215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:05:13.109211   60215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:05:13.125828   60215 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:05:13.125859   60215 kubeadm.go:157] found existing configuration files:
	
	I1216 21:05:13.125914   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:05:13.146982   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:05:13.147053   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:05:13.159382   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:05:13.176492   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:05:13.176572   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:05:13.190933   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:05:13.213230   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:05:13.213301   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:05:13.224631   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:05:13.234914   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:05:13.234975   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:05:13.245513   60215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:05:13.300399   60215 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:05:13.300491   60215 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:05:13.424114   60215 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:05:13.424252   60215 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:05:13.424372   60215 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:05:13.434507   60215 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:05:13.436710   60215 out.go:235]   - Generating certificates and keys ...
	I1216 21:05:13.436825   60215 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:05:13.436985   60215 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:05:13.437127   60215 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:05:13.437215   60215 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:05:13.437317   60215 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:05:13.437404   60215 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:05:13.437822   60215 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:05:13.438183   60215 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:05:13.438724   60215 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:05:13.439096   60215 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:05:13.439334   60215 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:05:13.439399   60215 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:05:13.528853   60215 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:05:13.700795   60215 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:05:13.890142   60215 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:05:14.166151   60215 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:05:14.310513   60215 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:05:14.311121   60215 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:05:14.317114   60215 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:05:14.319080   60215 out.go:235]   - Booting up control plane ...
	I1216 21:05:14.319218   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:05:14.319332   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:05:14.319518   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:05:14.340394   60215 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:05:14.348443   60215 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:05:14.348533   60215 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:05:14.493244   60215 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:05:14.493456   60215 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:05:14.995210   60215 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.042805ms
	I1216 21:05:14.995325   60215 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:05:20.000911   60215 kubeadm.go:310] [api-check] The API server is healthy after 5.002773967s
	I1216 21:05:20.019851   60215 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:05:20.037375   60215 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:05:20.074003   60215 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:05:20.074237   60215 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-606219 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:05:20.087136   60215 kubeadm.go:310] [bootstrap-token] Using token: wev02f.lvhctqt9pq1agi1c
	I1216 21:05:20.088742   60215 out.go:235]   - Configuring RBAC rules ...
	I1216 21:05:20.088893   60215 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:05:20.094114   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:05:20.101979   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:05:20.105419   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:05:20.112443   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:05:20.116045   60215 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:05:20.406790   60215 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:05:20.844101   60215 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:05:21.414298   60215 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:05:21.414397   60215 kubeadm.go:310] 
	I1216 21:05:21.414488   60215 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:05:21.414504   60215 kubeadm.go:310] 
	I1216 21:05:21.414636   60215 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:05:21.414655   60215 kubeadm.go:310] 
	I1216 21:05:21.414694   60215 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:05:21.414796   60215 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:05:21.414866   60215 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:05:21.414877   60215 kubeadm.go:310] 
	I1216 21:05:21.414978   60215 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:05:21.415004   60215 kubeadm.go:310] 
	I1216 21:05:21.415071   60215 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:05:21.415080   60215 kubeadm.go:310] 
	I1216 21:05:21.415147   60215 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:05:21.415314   60215 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:05:21.415424   60215 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:05:21.415444   60215 kubeadm.go:310] 
	I1216 21:05:21.415568   60215 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:05:21.415674   60215 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:05:21.415690   60215 kubeadm.go:310] 
	I1216 21:05:21.415837   60215 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wev02f.lvhctqt9pq1agi1c \
	I1216 21:05:21.415982   60215 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:05:21.416023   60215 kubeadm.go:310] 	--control-plane 
	I1216 21:05:21.416033   60215 kubeadm.go:310] 
	I1216 21:05:21.416152   60215 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:05:21.416165   60215 kubeadm.go:310] 
	I1216 21:05:21.416295   60215 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wev02f.lvhctqt9pq1agi1c \
	I1216 21:05:21.416452   60215 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:05:21.417157   60215 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:05:21.417251   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:05:21.417265   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:05:21.418899   60215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:05:21.420240   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:05:21.438639   60215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:05:21.470443   60215 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:05:21.470525   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:21.470552   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-606219 minikube.k8s.io/updated_at=2024_12_16T21_05_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=embed-certs-606219 minikube.k8s.io/primary=true
	I1216 21:05:21.721162   60215 ops.go:34] apiserver oom_adj: -16
	I1216 21:05:21.721292   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:22.221634   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:22.722431   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:23.221436   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:23.721948   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.222009   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.722203   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.835684   60215 kubeadm.go:1113] duration metric: took 3.36522517s to wait for elevateKubeSystemPrivileges
	I1216 21:05:24.835729   60215 kubeadm.go:394] duration metric: took 5m0.316036708s to StartCluster
	I1216 21:05:24.835751   60215 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:05:24.835847   60215 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:05:24.838279   60215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:05:24.838580   60215 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:05:24.838625   60215 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:05:24.838747   60215 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-606219"
	I1216 21:05:24.838768   60215 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-606219"
	W1216 21:05:24.838789   60215 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:05:24.838816   60215 config.go:182] Loaded profile config "embed-certs-606219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:05:24.838825   60215 addons.go:69] Setting default-storageclass=true in profile "embed-certs-606219"
	I1216 21:05:24.838832   60215 addons.go:69] Setting metrics-server=true in profile "embed-certs-606219"
	I1216 21:05:24.838846   60215 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-606219"
	I1216 21:05:24.838822   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.838848   60215 addons.go:234] Setting addon metrics-server=true in "embed-certs-606219"
	W1216 21:05:24.838945   60215 addons.go:243] addon metrics-server should already be in state true
	I1216 21:05:24.838965   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.839285   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839292   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839331   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.839364   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839415   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.839496   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.843833   60215 out.go:177] * Verifying Kubernetes components...
	I1216 21:05:24.845341   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:05:24.857648   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I1216 21:05:24.858457   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.859021   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.859037   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.861356   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36663
	I1216 21:05:24.861406   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I1216 21:05:24.861357   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.861844   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.862150   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.862188   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.862315   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.862334   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.862334   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.862661   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.862876   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.862894   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.863171   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.863200   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.863634   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.863964   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.867371   60215 addons.go:234] Setting addon default-storageclass=true in "embed-certs-606219"
	W1216 21:05:24.867392   60215 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:05:24.867419   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.867758   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.867801   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.884243   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35999
	I1216 21:05:24.884680   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.885282   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.885304   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.885380   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36799
	I1216 21:05:24.885657   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.885730   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.885934   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.886191   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.886202   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.886473   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.886831   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.886853   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.887935   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.890092   60215 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:05:24.891395   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:05:24.891413   60215 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:05:24.891441   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.894367   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I1216 21:05:24.894926   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.895551   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.895570   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.895832   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.896148   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.896382   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.896501   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.896523   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.897136   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.897327   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.897507   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.897673   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:24.898101   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.900061   60215 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:05:24.901390   60215 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:05:24.901412   60215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:05:24.901432   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.904063   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.904403   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.904421   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.904617   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.904828   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.904969   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.905117   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:24.907518   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I1216 21:05:24.907890   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.908349   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.908362   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.908615   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.908793   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.910349   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.910557   60215 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:05:24.910590   60215 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:05:24.910623   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.913163   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.913546   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.913628   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.913971   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.914156   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.914402   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.914562   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:25.054773   60215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:05:25.077692   60215 node_ready.go:35] waiting up to 6m0s for node "embed-certs-606219" to be "Ready" ...
	I1216 21:05:25.085592   60215 node_ready.go:49] node "embed-certs-606219" has status "Ready":"True"
	I1216 21:05:25.085618   60215 node_ready.go:38] duration metric: took 7.893359ms for node "embed-certs-606219" to be "Ready" ...
	I1216 21:05:25.085630   60215 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:05:25.092073   60215 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:25.160890   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:05:25.171950   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:05:25.174517   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:05:25.174540   60215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:05:25.201386   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:05:25.201415   60215 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:05:25.279568   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:25.279599   60215 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:05:25.316528   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:25.944495   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944521   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944529   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944533   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944816   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.944835   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.944845   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944855   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944855   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.944869   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.944876   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944888   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944817   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945069   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945131   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.945147   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.945168   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.945173   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945218   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.961427   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.961449   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.961729   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.961743   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.745600   60215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.429029698s)
	I1216 21:05:26.745665   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:26.745678   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:26.746097   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:26.746115   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:26.746128   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.746142   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:26.746151   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:26.746429   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:26.746446   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.746457   60215 addons.go:475] Verifying addon metrics-server=true in "embed-certs-606219"
	I1216 21:05:26.746480   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:26.748859   60215 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1216 21:05:26.785021   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:05:26.785309   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:05:26.750502   60215 addons.go:510] duration metric: took 1.911885721s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1216 21:05:27.124629   60215 pod_ready.go:103] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:28.100607   60215 pod_ready.go:93] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:28.100642   60215 pod_ready.go:82] duration metric: took 3.008540123s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.100654   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.107620   60215 pod_ready.go:93] pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:28.107649   60215 pod_ready.go:82] duration metric: took 6.986126ms for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.107661   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:30.114012   60215 pod_ready.go:103] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:31.116704   60215 pod_ready.go:93] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:31.116738   60215 pod_ready.go:82] duration metric: took 3.009069732s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.116752   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.122043   60215 pod_ready.go:93] pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:31.122079   60215 pod_ready.go:82] duration metric: took 5.318248ms for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.122089   60215 pod_ready.go:39] duration metric: took 6.036446164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:05:31.122107   60215 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:05:31.122167   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:05:31.140854   60215 api_server.go:72] duration metric: took 6.302233923s to wait for apiserver process to appear ...
	I1216 21:05:31.140887   60215 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:05:31.140910   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:05:31.146080   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 200:
	ok
	I1216 21:05:31.147076   60215 api_server.go:141] control plane version: v1.32.0
	I1216 21:05:31.147107   60215 api_server.go:131] duration metric: took 6.2056ms to wait for apiserver health ...
	I1216 21:05:31.147115   60215 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:05:31.152598   60215 system_pods.go:59] 9 kube-system pods found
	I1216 21:05:31.152627   60215 system_pods.go:61] "coredns-668d6bf9bc-5c74p" [ef8e73b6-150f-47cc-9df9-dcf983e5bd6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.152634   60215 system_pods.go:61] "coredns-668d6bf9bc-xhdlz" [c1b5b585-f005-4885-9809-60f60e03bf04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.152640   60215 system_pods.go:61] "etcd-embed-certs-606219" [f5595ee4-23f3-4227-8e25-8679fd2dc722] Running
	I1216 21:05:31.152643   60215 system_pods.go:61] "kube-apiserver-embed-certs-606219" [be11ba17-ecee-47c1-a4bd-329e0e705369] Running
	I1216 21:05:31.152647   60215 system_pods.go:61] "kube-controller-manager-embed-certs-606219" [21210597-d4d5-4cab-9a24-2d9f702f682d] Running
	I1216 21:05:31.152652   60215 system_pods.go:61] "kube-proxy-677x9" [37810520-4f02-46c4-8eeb-6dc70c859e3e] Running
	I1216 21:05:31.152655   60215 system_pods.go:61] "kube-scheduler-embed-certs-606219" [5a39f42d-b727-4acd-bd39-ae1c56a5b725] Running
	I1216 21:05:31.152659   60215 system_pods.go:61] "metrics-server-f79f97bbb-6fxnl" [828f2925-402c-4f49-89e1-354e082c0de4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:31.152662   60215 system_pods.go:61] "storage-provisioner" [6437bd61-690b-498d-b35c-e2ef4eb5be97] Running
	I1216 21:05:31.152669   60215 system_pods.go:74] duration metric: took 5.548798ms to wait for pod list to return data ...
	I1216 21:05:31.152675   60215 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:05:31.155444   60215 default_sa.go:45] found service account: "default"
	I1216 21:05:31.155469   60215 default_sa.go:55] duration metric: took 2.788897ms for default service account to be created ...
	I1216 21:05:31.155477   60215 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:05:31.160520   60215 system_pods.go:86] 9 kube-system pods found
	I1216 21:05:31.160548   60215 system_pods.go:89] "coredns-668d6bf9bc-5c74p" [ef8e73b6-150f-47cc-9df9-dcf983e5bd6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.160555   60215 system_pods.go:89] "coredns-668d6bf9bc-xhdlz" [c1b5b585-f005-4885-9809-60f60e03bf04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.160561   60215 system_pods.go:89] "etcd-embed-certs-606219" [f5595ee4-23f3-4227-8e25-8679fd2dc722] Running
	I1216 21:05:31.160565   60215 system_pods.go:89] "kube-apiserver-embed-certs-606219" [be11ba17-ecee-47c1-a4bd-329e0e705369] Running
	I1216 21:05:31.160569   60215 system_pods.go:89] "kube-controller-manager-embed-certs-606219" [21210597-d4d5-4cab-9a24-2d9f702f682d] Running
	I1216 21:05:31.160573   60215 system_pods.go:89] "kube-proxy-677x9" [37810520-4f02-46c4-8eeb-6dc70c859e3e] Running
	I1216 21:05:31.160576   60215 system_pods.go:89] "kube-scheduler-embed-certs-606219" [5a39f42d-b727-4acd-bd39-ae1c56a5b725] Running
	I1216 21:05:31.160580   60215 system_pods.go:89] "metrics-server-f79f97bbb-6fxnl" [828f2925-402c-4f49-89e1-354e082c0de4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:31.160584   60215 system_pods.go:89] "storage-provisioner" [6437bd61-690b-498d-b35c-e2ef4eb5be97] Running
	I1216 21:05:31.160591   60215 system_pods.go:126] duration metric: took 5.109359ms to wait for k8s-apps to be running ...
	I1216 21:05:31.160597   60215 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:05:31.160637   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:31.177182   60215 system_svc.go:56] duration metric: took 16.575484ms WaitForService to wait for kubelet
	I1216 21:05:31.177216   60215 kubeadm.go:582] duration metric: took 6.33860089s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:05:31.177239   60215 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:05:31.180614   60215 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:05:31.180635   60215 node_conditions.go:123] node cpu capacity is 2
	I1216 21:05:31.180645   60215 node_conditions.go:105] duration metric: took 3.400617ms to run NodePressure ...
	I1216 21:05:31.180656   60215 start.go:241] waiting for startup goroutines ...
	I1216 21:05:31.180667   60215 start.go:246] waiting for cluster config update ...
	I1216 21:05:31.180684   60215 start.go:255] writing updated cluster config ...
	I1216 21:05:31.180960   60215 ssh_runner.go:195] Run: rm -f paused
	I1216 21:05:31.232404   60215 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:05:31.234366   60215 out.go:177] * Done! kubectl is now configured to use "embed-certs-606219" cluster and "default" namespace by default
	I1216 21:06:06.787417   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:06.787673   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:06:06.787700   60933 kubeadm.go:310] 
	I1216 21:06:06.787779   60933 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 21:06:06.787849   60933 kubeadm.go:310] 		timed out waiting for the condition
	I1216 21:06:06.787864   60933 kubeadm.go:310] 
	I1216 21:06:06.787894   60933 kubeadm.go:310] 	This error is likely caused by:
	I1216 21:06:06.787944   60933 kubeadm.go:310] 		- The kubelet is not running
	I1216 21:06:06.788115   60933 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 21:06:06.788131   60933 kubeadm.go:310] 
	I1216 21:06:06.788238   60933 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 21:06:06.788270   60933 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 21:06:06.788328   60933 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 21:06:06.788346   60933 kubeadm.go:310] 
	I1216 21:06:06.788492   60933 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 21:06:06.788568   60933 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 21:06:06.788575   60933 kubeadm.go:310] 
	I1216 21:06:06.788706   60933 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 21:06:06.788914   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 21:06:06.789052   60933 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 21:06:06.789150   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 21:06:06.789160   60933 kubeadm.go:310] 
	I1216 21:06:06.789970   60933 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:06:06.790084   60933 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 21:06:06.790222   60933 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1216 21:06:06.790376   60933 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
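	The lines that follow show minikube's automatic recovery attempt after this first kubeadm failure: it resets the node, checks for stale /etc/kubernetes/*.conf files, removes them, and re-runs kubeadm init. A rough manual equivalent of that cleanup, assembled from the commands visible in the log below (a sketch only; the binary path and CRI socket are as logged for this v1.20.0 run):

	    # undo the partial control-plane bootstrap (same command minikube issues below)
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	    # drop any stale kubeconfig fragments before retrying init
	    sudo rm -f /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	               /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf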
	
	I1216 21:06:06.790430   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:06:07.272336   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:06:07.288881   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:06:07.303411   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:06:07.303437   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:06:07.303486   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:06:07.314605   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:06:07.314675   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:06:07.326523   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:06:07.336506   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:06:07.336587   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:06:07.347505   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:06:07.357743   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:06:07.357799   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:06:07.368251   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:06:07.378296   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:06:07.378366   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:06:07.390625   60933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:06:07.461800   60933 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 21:06:07.461911   60933 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:06:07.607467   60933 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:06:07.607664   60933 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:06:07.607821   60933 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 21:06:07.821429   60933 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:06:07.823617   60933 out.go:235]   - Generating certificates and keys ...
	I1216 21:06:07.823728   60933 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:06:07.823826   60933 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:06:07.823970   60933 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:06:07.824066   60933 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:06:07.824191   60933 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:06:07.824281   60933 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:06:07.824374   60933 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:06:07.824452   60933 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:06:07.824529   60933 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:06:07.824634   60933 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:06:07.824728   60933 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:06:07.824826   60933 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:06:08.070481   60933 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:06:08.416182   60933 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:06:08.472848   60933 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:06:08.528700   60933 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:06:08.551528   60933 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:06:08.552215   60933 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:06:08.552299   60933 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:06:08.702187   60933 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:06:08.704170   60933 out.go:235]   - Booting up control plane ...
	I1216 21:06:08.704286   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:06:08.721205   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:06:08.722619   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:06:08.724289   60933 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:06:08.726457   60933 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 21:06:48.729045   60933 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 21:06:48.729713   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:48.730028   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:06:53.730648   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:53.730870   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:07:03.731670   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:07:03.731904   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:07:23.733276   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:07:23.733489   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:08:03.734439   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:08:03.734730   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:08:03.734768   60933 kubeadm.go:310] 
	I1216 21:08:03.734831   60933 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 21:08:03.734902   60933 kubeadm.go:310] 		timed out waiting for the condition
	I1216 21:08:03.734917   60933 kubeadm.go:310] 
	I1216 21:08:03.734966   60933 kubeadm.go:310] 	This error is likely caused by:
	I1216 21:08:03.735003   60933 kubeadm.go:310] 		- The kubelet is not running
	I1216 21:08:03.735094   60933 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 21:08:03.735104   60933 kubeadm.go:310] 
	I1216 21:08:03.735260   60933 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 21:08:03.735325   60933 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 21:08:03.735353   60933 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 21:08:03.735359   60933 kubeadm.go:310] 
	I1216 21:08:03.735486   60933 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 21:08:03.735604   60933 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 21:08:03.735614   60933 kubeadm.go:310] 
	I1216 21:08:03.735757   60933 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 21:08:03.735880   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 21:08:03.735986   60933 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 21:08:03.736096   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 21:08:03.736107   60933 kubeadm.go:310] 
	I1216 21:08:03.736944   60933 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:08:03.737145   60933 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 21:08:03.737211   60933 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 21:08:03.737287   60933 kubeadm.go:394] duration metric: took 7m57.891196073s to StartCluster
	I1216 21:08:03.737346   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:08:03.737417   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:08:03.789377   60933 cri.go:89] found id: ""
	I1216 21:08:03.789412   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.789421   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:08:03.789426   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:08:03.789477   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:08:03.831122   60933 cri.go:89] found id: ""
	I1216 21:08:03.831150   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.831161   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:08:03.831167   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:08:03.831236   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:08:03.870598   60933 cri.go:89] found id: ""
	I1216 21:08:03.870625   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.870634   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:08:03.870640   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:08:03.870695   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:08:03.909060   60933 cri.go:89] found id: ""
	I1216 21:08:03.909095   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.909103   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:08:03.909109   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:08:03.909163   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:08:03.946925   60933 cri.go:89] found id: ""
	I1216 21:08:03.946954   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.946962   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:08:03.946968   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:08:03.947038   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:08:03.985596   60933 cri.go:89] found id: ""
	I1216 21:08:03.985629   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.985650   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:08:03.985670   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:08:03.985736   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:08:04.022504   60933 cri.go:89] found id: ""
	I1216 21:08:04.022530   60933 logs.go:282] 0 containers: []
	W1216 21:08:04.022538   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:08:04.022545   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:08:04.022608   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:08:04.075636   60933 cri.go:89] found id: ""
	I1216 21:08:04.075667   60933 logs.go:282] 0 containers: []
	W1216 21:08:04.075677   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:08:04.075688   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:08:04.075707   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:08:04.180622   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:08:04.180653   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:08:04.180671   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:08:04.308091   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:08:04.308146   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:08:04.353240   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:08:04.353294   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:08:04.407919   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:08:04.407955   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 21:08:04.423583   60933 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1216 21:08:04.423644   60933 out.go:270] * 
	W1216 21:08:04.423727   60933 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 21:08:04.423749   60933 out.go:270] * 
	W1216 21:08:04.424576   60933 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 21:08:04.428361   60933 out.go:201] 
	W1216 21:08:04.429839   60933 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 21:08:04.429919   60933 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 21:08:04.429958   60933 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 21:08:04.431619   60933 out.go:201] 
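	The start ultimately exits with K8S_KUBELET_NOT_RUNNING, and the suggestion above points at the kubelet cgroup driver. A troubleshooting sequence along the lines the log itself recommends (the profile name is a placeholder; the --extra-config flag is only worth trying if the kubelet and CRI-O cgroup drivers actually disagree):

	    # inspect why the kubelet never answered on localhost:10248/healthz
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet | tail -n 100
	    # look for control-plane containers that crashed or exited under CRI-O
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    # retry the profile with the kubelet forced onto the systemd cgroup driver, as suggested above
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd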
	
	
	==> CRI-O <==
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.308126730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383649308097800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1beef2a1-30cc-4d28-ad32-bcd6a1f3ef02 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.308630069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d71b613c-aa0e-4d21-adbc-a78f63b629e5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.308683666Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d71b613c-aa0e-4d21-adbc-a78f63b629e5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.308879206Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcd2618255da99a588a2bbff1366ef3ae7975c5b7427ce4189ef2c5fd444ba69,PodSandboxId:4a027b6736fd003341c3b022ecb72f4f64a4e5defb13a88abc98e21dd788c0bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383100726836966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b742666-dfd4-4c9b-95a9-25367ec2a718,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93fa31c7526abdf13792a5cfc284dc96b300ca36237c5d3d16c389ba6b4b224,PodSandboxId:65da06c38941b22dfca9fe46390b838efe27c4632ba3e624580b97f346eee2e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383100350440496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4wwvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb3f8053812ebdd6ac1c1b3990ea33cff2d03a50d43c7f54310432574636e2a6,PodSandboxId:5d2b63620968f65012abd2f9f158fabb1cc6a4681aab5225078b4706815f5f06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383100100017809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-c4qfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9
bf3125-1e6d-4794-a2e6-2ff7ed5132b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca52a5e130b887ba85db3ae0ebc536eabd241b8b47bd574e2312e53de9ed7e6,PodSandboxId:6bc53584513262f57a230b5ca6bd863547fef49c2bb8a3889b4b264e6e89075d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:
1734383099371151110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5hq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0d357a-dda2-4508-a954-5c67eaf5b8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f644bafa71082b2f43c37e4984bcf95201973c749ec322d44cf504a64879cf1d,PodSandboxId:2a9a6364a517696e5951ef6bd5c50ed7cc3fb1a61088318b4a9964e81c900bf8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383088510061093,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d0531328a1e22e77c38d5296534b60,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80b96bf35cfc3a12f52bfdb0f2e4eae378235fd658763bc0054a669a0e7919a,PodSandboxId:e40606d090d61ee9e28ab4fbfec4316a013f9eb9e3c827afd055c3cfc5929844,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:173438308846219
1813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5f24463af3d3cd6c412e107e62d9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cb850ac82ce6321ec8c820d2b187338227813eb490151ac8c15b3c8185fc60,PodSandboxId:45454fd600089289d1da856ebc8e119c0fb670019408f2221c8547d8eb4dc690,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383088387050693,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385603c4d7165566e9b078308e8ed0ab97e4f8623edef92149ca1315ea5bcecd,PodSandboxId:d0ad047ea69298677e8b3012f5974254f73924ffe29df1b7d429392a5a03a9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383088345861879,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d61d90d3fc49432c3d4314e8cdc6846,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54a482a8f0d22fa1dbe26bce347ead5e18fdbf256a99bfe5ede3c5c070c44e8c,PodSandboxId:63e38a0cdd4ab48bf512e430486626bc6ba5d8812126d7f48544b696d00fe7c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382800623517087,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d71b613c-aa0e-4d21-adbc-a78f63b629e5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.335007277Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=33510141-8f1c-45a5-9457-adee63e2fe5a name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.336613027Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f8a220f9376eead0e689a29513ef9a98839e4bd1897a0f178b7c51ce5fb1f417,Metadata:&PodSandboxMetadata{Name:metrics-server-f79f97bbb-l7dcr,Uid:fabafb40-1cb8-427b-88a6-37eeb6fd5b77,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1734383100760781190,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-f79f97bbb-l7dcr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fabafb40-1cb8-427b-88a6-37eeb6fd5b77,k8s-app: metrics-server,pod-template-hash: f79f97bbb,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-16T21:05:00.446264267Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4a027b6736fd003341c3b022ecb72f4f64a4e5defb13a88abc98e21dd788c0bc,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3b742666-dfd4-4c9b-95a9-25367ec2a718,Names
pace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1734383100432172956,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b742666-dfd4-4c9b-95a9-25367ec2a718,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes
\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-16T21:05:00.118889769Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:65da06c38941b22dfca9fe46390b838efe27c4632ba3e624580b97f346eee2e6,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-4wwvd,Uid:1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1734383099243948199,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-4wwvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-16T21:04:58.921971861Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5d2b63620968f65012abd2f9f158fabb1cc6a4681aab5225078b4706815f5f06,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-c4qfj,Uid:b9bf3125-1e6d-4794-a2e
6-2ff7ed5132b1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1734383099170923652,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-c4qfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9bf3125-1e6d-4794-a2e6-2ff7ed5132b1,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-16T21:04:58.863463381Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6bc53584513262f57a230b5ca6bd863547fef49c2bb8a3889b4b264e6e89075d,Metadata:&PodSandboxMetadata{Name:kube-proxy-m5hq8,Uid:ca0d357a-dda2-4508-a954-5c67eaf5b8ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1734383098968834397,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-m5hq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0d357a-dda2-4508-a954-5c67eaf5b8ac,k8s-app: kube-proxy,pod-templat
e-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-16T21:04:58.644012801Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:45454fd600089289d1da856ebc8e119c0fb670019408f2221c8547d8eb4dc690,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-232338,Uid:9a928546caa71eb5802e4715858850ef,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1734383088187531133,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.240:8443,kubernetes.io/config.hash: 9a928546caa71eb5802e4715858850ef,kubernetes.io/config.seen: 2024-12-16T21:04:47.706126210Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e40606d090d61ee9e28ab4fbfec4316a01
3f9eb9e3c827afd055c3cfc5929844,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-232338,Uid:2e5f24463af3d3cd6c412e107e62d9ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1734383088184526515,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5f24463af3d3cd6c412e107e62d9ac,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2e5f24463af3d3cd6c412e107e62d9ac,kubernetes.io/config.seen: 2024-12-16T21:04:47.706129024Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2a9a6364a517696e5951ef6bd5c50ed7cc3fb1a61088318b4a9964e81c900bf8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-232338,Uid:31d0531328a1e22e77c38d5296534b60,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1734383088178268621,Labels:map[string]string{component: kube-controller-manager,io.ku
bernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d0531328a1e22e77c38d5296534b60,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 31d0531328a1e22e77c38d5296534b60,kubernetes.io/config.seen: 2024-12-16T21:04:47.706127699Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d0ad047ea69298677e8b3012f5974254f73924ffe29df1b7d429392a5a03a9dc,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-232338,Uid:7d61d90d3fc49432c3d4314e8cdc6846,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1734383088163170887,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d61d90d3fc49432c3d4314e8cdc6846,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.240:2379,k
ubernetes.io/config.hash: 7d61d90d3fc49432c3d4314e8cdc6846,kubernetes.io/config.seen: 2024-12-16T21:04:47.706121255Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:63e38a0cdd4ab48bf512e430486626bc6ba5d8812126d7f48544b696d00fe7c6,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-232338,Uid:9a928546caa71eb5802e4715858850ef,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1734382777905996742,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.240:8443,kubernetes.io/config.hash: 9a928546caa71eb5802e4715858850ef,kubernetes.io/config.seen: 2024-12-16T20:59:37.423618984Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/intercep
tors.go:74" id=33510141-8f1c-45a5-9457-adee63e2fe5a name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.337401153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aafdcad8-08a3-40a4-947a-7a2b5a59c0ab name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.337476385Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aafdcad8-08a3-40a4-947a-7a2b5a59c0ab name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.337932465Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcd2618255da99a588a2bbff1366ef3ae7975c5b7427ce4189ef2c5fd444ba69,PodSandboxId:4a027b6736fd003341c3b022ecb72f4f64a4e5defb13a88abc98e21dd788c0bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383100726836966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b742666-dfd4-4c9b-95a9-25367ec2a718,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93fa31c7526abdf13792a5cfc284dc96b300ca36237c5d3d16c389ba6b4b224,PodSandboxId:65da06c38941b22dfca9fe46390b838efe27c4632ba3e624580b97f346eee2e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383100350440496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4wwvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb3f8053812ebdd6ac1c1b3990ea33cff2d03a50d43c7f54310432574636e2a6,PodSandboxId:5d2b63620968f65012abd2f9f158fabb1cc6a4681aab5225078b4706815f5f06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383100100017809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-c4qfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9
bf3125-1e6d-4794-a2e6-2ff7ed5132b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca52a5e130b887ba85db3ae0ebc536eabd241b8b47bd574e2312e53de9ed7e6,PodSandboxId:6bc53584513262f57a230b5ca6bd863547fef49c2bb8a3889b4b264e6e89075d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:
1734383099371151110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5hq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0d357a-dda2-4508-a954-5c67eaf5b8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f644bafa71082b2f43c37e4984bcf95201973c749ec322d44cf504a64879cf1d,PodSandboxId:2a9a6364a517696e5951ef6bd5c50ed7cc3fb1a61088318b4a9964e81c900bf8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383088510061093,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d0531328a1e22e77c38d5296534b60,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80b96bf35cfc3a12f52bfdb0f2e4eae378235fd658763bc0054a669a0e7919a,PodSandboxId:e40606d090d61ee9e28ab4fbfec4316a013f9eb9e3c827afd055c3cfc5929844,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:173438308846219
1813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5f24463af3d3cd6c412e107e62d9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cb850ac82ce6321ec8c820d2b187338227813eb490151ac8c15b3c8185fc60,PodSandboxId:45454fd600089289d1da856ebc8e119c0fb670019408f2221c8547d8eb4dc690,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383088387050693,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385603c4d7165566e9b078308e8ed0ab97e4f8623edef92149ca1315ea5bcecd,PodSandboxId:d0ad047ea69298677e8b3012f5974254f73924ffe29df1b7d429392a5a03a9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383088345861879,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d61d90d3fc49432c3d4314e8cdc6846,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54a482a8f0d22fa1dbe26bce347ead5e18fdbf256a99bfe5ede3c5c070c44e8c,PodSandboxId:63e38a0cdd4ab48bf512e430486626bc6ba5d8812126d7f48544b696d00fe7c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382800623517087,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aafdcad8-08a3-40a4-947a-7a2b5a59c0ab name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.343682193Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=e7ea5f97-0849-44c5-a84a-6636e8adbae0 name=/runtime.v1.RuntimeService/Status
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.343749087Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=e7ea5f97-0849-44c5-a84a-6636e8adbae0 name=/runtime.v1.RuntimeService/Status
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.354726250Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=13b8f371-b48c-4c38-a57c-760a2aafd03e name=/runtime.v1.RuntimeService/Version
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.354817342Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=13b8f371-b48c-4c38-a57c-760a2aafd03e name=/runtime.v1.RuntimeService/Version
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.355912961Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=35551066-8a66-49d6-ba01-3d785ffc7286 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.356270895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383649356247120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=35551066-8a66-49d6-ba01-3d785ffc7286 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.356782621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2eee7d1-3495-47b8-80f4-5f34be4c6241 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.356868915Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2eee7d1-3495-47b8-80f4-5f34be4c6241 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.357083371Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcd2618255da99a588a2bbff1366ef3ae7975c5b7427ce4189ef2c5fd444ba69,PodSandboxId:4a027b6736fd003341c3b022ecb72f4f64a4e5defb13a88abc98e21dd788c0bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383100726836966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b742666-dfd4-4c9b-95a9-25367ec2a718,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93fa31c7526abdf13792a5cfc284dc96b300ca36237c5d3d16c389ba6b4b224,PodSandboxId:65da06c38941b22dfca9fe46390b838efe27c4632ba3e624580b97f346eee2e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383100350440496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4wwvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb3f8053812ebdd6ac1c1b3990ea33cff2d03a50d43c7f54310432574636e2a6,PodSandboxId:5d2b63620968f65012abd2f9f158fabb1cc6a4681aab5225078b4706815f5f06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383100100017809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-c4qfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9
bf3125-1e6d-4794-a2e6-2ff7ed5132b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca52a5e130b887ba85db3ae0ebc536eabd241b8b47bd574e2312e53de9ed7e6,PodSandboxId:6bc53584513262f57a230b5ca6bd863547fef49c2bb8a3889b4b264e6e89075d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:
1734383099371151110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5hq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0d357a-dda2-4508-a954-5c67eaf5b8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f644bafa71082b2f43c37e4984bcf95201973c749ec322d44cf504a64879cf1d,PodSandboxId:2a9a6364a517696e5951ef6bd5c50ed7cc3fb1a61088318b4a9964e81c900bf8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383088510061093,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d0531328a1e22e77c38d5296534b60,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80b96bf35cfc3a12f52bfdb0f2e4eae378235fd658763bc0054a669a0e7919a,PodSandboxId:e40606d090d61ee9e28ab4fbfec4316a013f9eb9e3c827afd055c3cfc5929844,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:173438308846219
1813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5f24463af3d3cd6c412e107e62d9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cb850ac82ce6321ec8c820d2b187338227813eb490151ac8c15b3c8185fc60,PodSandboxId:45454fd600089289d1da856ebc8e119c0fb670019408f2221c8547d8eb4dc690,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383088387050693,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385603c4d7165566e9b078308e8ed0ab97e4f8623edef92149ca1315ea5bcecd,PodSandboxId:d0ad047ea69298677e8b3012f5974254f73924ffe29df1b7d429392a5a03a9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383088345861879,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d61d90d3fc49432c3d4314e8cdc6846,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54a482a8f0d22fa1dbe26bce347ead5e18fdbf256a99bfe5ede3c5c070c44e8c,PodSandboxId:63e38a0cdd4ab48bf512e430486626bc6ba5d8812126d7f48544b696d00fe7c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382800623517087,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2eee7d1-3495-47b8-80f4-5f34be4c6241 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.400900384Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=742a8e0b-ef0e-4cee-92c4-aaa72a2beb06 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.400981370Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=742a8e0b-ef0e-4cee-92c4-aaa72a2beb06 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.402261203Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e34b5dc-257f-4e85-93e5-344196295c22 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.402766560Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383649402739178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e34b5dc-257f-4e85-93e5-344196295c22 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.403976532Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d8388d8-c9ea-4ab1-a858-6a6337f099d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.404068303Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d8388d8-c9ea-4ab1-a858-6a6337f099d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:09 no-preload-232338 crio[723]: time="2024-12-16 21:14:09.404291842Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcd2618255da99a588a2bbff1366ef3ae7975c5b7427ce4189ef2c5fd444ba69,PodSandboxId:4a027b6736fd003341c3b022ecb72f4f64a4e5defb13a88abc98e21dd788c0bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383100726836966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b742666-dfd4-4c9b-95a9-25367ec2a718,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93fa31c7526abdf13792a5cfc284dc96b300ca36237c5d3d16c389ba6b4b224,PodSandboxId:65da06c38941b22dfca9fe46390b838efe27c4632ba3e624580b97f346eee2e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383100350440496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4wwvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb3f8053812ebdd6ac1c1b3990ea33cff2d03a50d43c7f54310432574636e2a6,PodSandboxId:5d2b63620968f65012abd2f9f158fabb1cc6a4681aab5225078b4706815f5f06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383100100017809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-c4qfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9
bf3125-1e6d-4794-a2e6-2ff7ed5132b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca52a5e130b887ba85db3ae0ebc536eabd241b8b47bd574e2312e53de9ed7e6,PodSandboxId:6bc53584513262f57a230b5ca6bd863547fef49c2bb8a3889b4b264e6e89075d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:
1734383099371151110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5hq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0d357a-dda2-4508-a954-5c67eaf5b8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f644bafa71082b2f43c37e4984bcf95201973c749ec322d44cf504a64879cf1d,PodSandboxId:2a9a6364a517696e5951ef6bd5c50ed7cc3fb1a61088318b4a9964e81c900bf8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383088510061093,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d0531328a1e22e77c38d5296534b60,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80b96bf35cfc3a12f52bfdb0f2e4eae378235fd658763bc0054a669a0e7919a,PodSandboxId:e40606d090d61ee9e28ab4fbfec4316a013f9eb9e3c827afd055c3cfc5929844,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:173438308846219
1813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5f24463af3d3cd6c412e107e62d9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cb850ac82ce6321ec8c820d2b187338227813eb490151ac8c15b3c8185fc60,PodSandboxId:45454fd600089289d1da856ebc8e119c0fb670019408f2221c8547d8eb4dc690,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383088387050693,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385603c4d7165566e9b078308e8ed0ab97e4f8623edef92149ca1315ea5bcecd,PodSandboxId:d0ad047ea69298677e8b3012f5974254f73924ffe29df1b7d429392a5a03a9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383088345861879,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d61d90d3fc49432c3d4314e8cdc6846,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54a482a8f0d22fa1dbe26bce347ead5e18fdbf256a99bfe5ede3c5c070c44e8c,PodSandboxId:63e38a0cdd4ab48bf512e430486626bc6ba5d8812126d7f48544b696d00fe7c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382800623517087,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d8388d8-c9ea-4ab1-a858-6a6337f099d7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dcd2618255da9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   4a027b6736fd0       storage-provisioner
	f93fa31c7526a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   65da06c38941b       coredns-668d6bf9bc-4wwvd
	eb3f8053812eb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   5d2b63620968f       coredns-668d6bf9bc-c4qfj
	9ca52a5e130b8       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   9 minutes ago       Running             kube-proxy                0                   6bc5358451326       kube-proxy-m5hq8
	f644bafa71082       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   9 minutes ago       Running             kube-controller-manager   3                   2a9a6364a5176       kube-controller-manager-no-preload-232338
	d80b96bf35cfc       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   9 minutes ago       Running             kube-scheduler            2                   e40606d090d61       kube-scheduler-no-preload-232338
	18cb850ac82ce       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   9 minutes ago       Running             kube-apiserver            3                   45454fd600089       kube-apiserver-no-preload-232338
	385603c4d7165       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   9 minutes ago       Running             etcd                      2                   d0ad047ea6929       etcd-no-preload-232338
	54a482a8f0d22       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   14 minutes ago      Exited              kube-apiserver            2                   63e38a0cdd4ab       kube-apiserver-no-preload-232338
	
	
	==> coredns [eb3f8053812ebdd6ac1c1b3990ea33cff2d03a50d43c7f54310432574636e2a6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f93fa31c7526abdf13792a5cfc284dc96b300ca36237c5d3d16c389ba6b4b224] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-232338
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-232338
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477
	                    minikube.k8s.io/name=no-preload-232338
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T21_04_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 21:04:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-232338
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 21:14:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 21:13:03 +0000   Mon, 16 Dec 2024 21:04:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 21:13:03 +0000   Mon, 16 Dec 2024 21:04:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 21:13:03 +0000   Mon, 16 Dec 2024 21:04:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 21:13:03 +0000   Mon, 16 Dec 2024 21:04:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.240
	  Hostname:    no-preload-232338
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d6d4b6a19254597baed2d6b2e63d93a
	  System UUID:                4d6d4b6a-1925-4597-baed-2d6b2e63d93a
	  Boot ID:                    c70c7922-4b19-43b3-83da-8cb42766b38e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-4wwvd                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-668d6bf9bc-c4qfj                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-no-preload-232338                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m15s
	  kube-system                 kube-apiserver-no-preload-232338             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-no-preload-232338    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-proxy-m5hq8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-no-preload-232338             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-f79f97bbb-l7dcr               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m9s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m9s   kube-proxy       
	  Normal  Starting                 9m16s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m15s  kubelet          Node no-preload-232338 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s  kubelet          Node no-preload-232338 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s  kubelet          Node no-preload-232338 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s  node-controller  Node no-preload-232338 event: Registered Node no-preload-232338 in Controller
	
	
	==> dmesg <==
	[  +4.983868] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.883837] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.605841] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.019208] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.070875] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057104] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.175964] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.152133] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.293255] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[ +16.626473] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[  +0.068009] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.261031] systemd-fstab-generator[1437]: Ignoring "noauto" option for root device
	[ +23.478187] kauditd_printk_skb: 90 callbacks suppressed
	[Dec16 21:00] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.175760] kauditd_printk_skb: 40 callbacks suppressed
	[ +36.814798] kauditd_printk_skb: 31 callbacks suppressed
	[Dec16 21:04] systemd-fstab-generator[3324]: Ignoring "noauto" option for root device
	[  +0.063880] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.496549] systemd-fstab-generator[3674]: Ignoring "noauto" option for root device
	[  +0.081770] kauditd_printk_skb: 55 callbacks suppressed
	[  +4.907189] systemd-fstab-generator[3787]: Ignoring "noauto" option for root device
	[  +0.095272] kauditd_printk_skb: 12 callbacks suppressed
	[Dec16 21:05] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [385603c4d7165566e9b078308e8ed0ab97e4f8623edef92149ca1315ea5bcecd] <==
	{"level":"info","ts":"2024-12-16T21:04:48.819590Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-16T21:04:48.819886Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"ee01ff8259a5f1e0","initial-advertise-peer-urls":["https://192.168.50.240:2380"],"listen-peer-urls":["https://192.168.50.240:2380"],"advertise-client-urls":["https://192.168.50.240:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.240:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-16T21:04:48.819939Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-16T21:04:48.820044Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.50.240:2380"}
	{"level":"info","ts":"2024-12-16T21:04:48.820076Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.50.240:2380"}
	{"level":"info","ts":"2024-12-16T21:04:49.461430Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ee01ff8259a5f1e0 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-16T21:04:49.461613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ee01ff8259a5f1e0 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-16T21:04:49.461713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ee01ff8259a5f1e0 received MsgPreVoteResp from ee01ff8259a5f1e0 at term 1"}
	{"level":"info","ts":"2024-12-16T21:04:49.461771Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ee01ff8259a5f1e0 became candidate at term 2"}
	{"level":"info","ts":"2024-12-16T21:04:49.461816Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ee01ff8259a5f1e0 received MsgVoteResp from ee01ff8259a5f1e0 at term 2"}
	{"level":"info","ts":"2024-12-16T21:04:49.461866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ee01ff8259a5f1e0 became leader at term 2"}
	{"level":"info","ts":"2024-12-16T21:04:49.461896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ee01ff8259a5f1e0 elected leader ee01ff8259a5f1e0 at term 2"}
	{"level":"info","ts":"2024-12-16T21:04:49.465549Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"ee01ff8259a5f1e0","local-member-attributes":"{Name:no-preload-232338 ClientURLs:[https://192.168.50.240:2379]}","request-path":"/0/members/ee01ff8259a5f1e0/attributes","cluster-id":"f821e93ad39fa3f0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-16T21:04:49.467371Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T21:04:49.467899Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T21:04:49.468385Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T21:04:49.469477Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f821e93ad39fa3f0","local-member-id":"ee01ff8259a5f1e0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T21:04:49.469710Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T21:04:49.469761Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T21:04:49.470130Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T21:04:49.475854Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-16T21:04:49.490484Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-16T21:04:49.490607Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-16T21:04:49.495049Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T21:04:49.499991Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.240:2379"}
	
	
	==> kernel <==
	 21:14:09 up 15 min,  0 users,  load average: 0.55, 0.30, 0.23
	Linux no-preload-232338 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [18cb850ac82ce6321ec8c820d2b187338227813eb490151ac8c15b3c8185fc60] <==
	W1216 21:09:52.230222       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:09:52.230392       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 21:09:52.231389       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 21:09:52.231441       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1216 21:10:52.231772       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:10:52.231905       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1216 21:10:52.231961       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:10:52.231984       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1216 21:10:52.233137       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 21:10:52.233193       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1216 21:12:52.233911       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:12:52.234386       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1216 21:12:52.234289       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:12:52.234572       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 21:12:52.235629       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 21:12:52.235727       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [54a482a8f0d22fa1dbe26bce347ead5e18fdbf256a99bfe5ede3c5c070c44e8c] <==
	W1216 21:04:40.813411       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.821897       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.827672       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.831069       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.852788       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.855210       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.895165       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.898844       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.921949       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.963188       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:41.016200       1 logging.go:55] [core] [Channel #199 SubChannel #200]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:41.019928       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:41.031971       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:41.157836       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:41.391532       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:41.526530       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:41.841869       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:45.420840       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:45.437601       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:45.449663       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:45.569142       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:45.634038       1 logging.go:55] [core] [Channel #199 SubChannel #200]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:45.664593       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:45.693700       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:45.701159       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [f644bafa71082b2f43c37e4984bcf95201973c749ec322d44cf504a64879cf1d] <==
	E1216 21:08:57.794299       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:08:57.872294       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:09:27.800968       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:09:27.881391       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:09:57.807828       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:09:57.890394       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:10:27.815843       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:10:27.899221       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:10:57.824243       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:10:57.907093       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 21:11:05.065772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="236.028µs"
	I1216 21:11:20.064736       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="78.85µs"
	E1216 21:11:27.830605       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:11:27.916294       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:11:57.837163       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:11:57.923928       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:12:27.844376       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:12:27.934172       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:12:57.853026       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:12:57.947703       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 21:13:03.042471       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-232338"
	E1216 21:13:27.859223       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:13:27.958276       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:13:57.868575       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:13:57.968465       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [9ca52a5e130b887ba85db3ae0ebc536eabd241b8b47bd574e2312e53de9ed7e6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1216 21:05:00.015519       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1216 21:05:00.047397       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.240"]
	E1216 21:05:00.047515       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 21:05:00.566393       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1216 21:05:00.566456       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 21:05:00.566483       1 server_linux.go:170] "Using iptables Proxier"
	I1216 21:05:00.602172       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 21:05:00.602632       1 server.go:497] "Version info" version="v1.32.0"
	I1216 21:05:00.602665       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 21:05:00.687521       1 config.go:199] "Starting service config controller"
	I1216 21:05:00.687668       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 21:05:00.687788       1 config.go:105] "Starting endpoint slice config controller"
	I1216 21:05:00.687896       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 21:05:00.695942       1 config.go:329] "Starting node config controller"
	I1216 21:05:00.696267       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 21:05:00.787875       1 shared_informer.go:320] Caches are synced for service config
	I1216 21:05:00.787960       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 21:05:00.812727       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d80b96bf35cfc3a12f52bfdb0f2e4eae378235fd658763bc0054a669a0e7919a] <==
	W1216 21:04:51.266841       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:51.266851       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:51.267455       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1216 21:04:51.267493       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.077506       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1216 21:04:52.077565       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.123175       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 21:04:52.123296       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.244516       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1216 21:04:52.244636       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.371942       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 21:04:52.372001       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.402837       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:52.402897       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.404564       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1216 21:04:52.404622       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.525409       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:52.525485       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.580017       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 21:04:52.580079       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.605416       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:52.605452       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.625399       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1216 21:04:52.625623       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1216 21:04:55.252595       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 21:12:58 no-preload-232338 kubelet[3681]: E1216 21:12:58.046189    3681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-l7dcr" podUID="fabafb40-1cb8-427b-88a6-37eeb6fd5b77"
	Dec 16 21:13:04 no-preload-232338 kubelet[3681]: E1216 21:13:04.272564    3681 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383584272236087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:04 no-preload-232338 kubelet[3681]: E1216 21:13:04.272604    3681 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383584272236087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:10 no-preload-232338 kubelet[3681]: E1216 21:13:10.044444    3681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-l7dcr" podUID="fabafb40-1cb8-427b-88a6-37eeb6fd5b77"
	Dec 16 21:13:14 no-preload-232338 kubelet[3681]: E1216 21:13:14.274442    3681 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383594273459072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:14 no-preload-232338 kubelet[3681]: E1216 21:13:14.274468    3681 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383594273459072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:23 no-preload-232338 kubelet[3681]: E1216 21:13:23.044281    3681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-l7dcr" podUID="fabafb40-1cb8-427b-88a6-37eeb6fd5b77"
	Dec 16 21:13:24 no-preload-232338 kubelet[3681]: E1216 21:13:24.275487    3681 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383604275163246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:24 no-preload-232338 kubelet[3681]: E1216 21:13:24.275531    3681 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383604275163246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:34 no-preload-232338 kubelet[3681]: E1216 21:13:34.282389    3681 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383614281028382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:34 no-preload-232338 kubelet[3681]: E1216 21:13:34.283192    3681 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383614281028382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:35 no-preload-232338 kubelet[3681]: E1216 21:13:35.044024    3681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-l7dcr" podUID="fabafb40-1cb8-427b-88a6-37eeb6fd5b77"
	Dec 16 21:13:44 no-preload-232338 kubelet[3681]: E1216 21:13:44.285696    3681 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383624284977840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:44 no-preload-232338 kubelet[3681]: E1216 21:13:44.285888    3681 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383624284977840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:49 no-preload-232338 kubelet[3681]: E1216 21:13:49.044001    3681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-l7dcr" podUID="fabafb40-1cb8-427b-88a6-37eeb6fd5b77"
	Dec 16 21:13:54 no-preload-232338 kubelet[3681]: E1216 21:13:54.078918    3681 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 16 21:13:54 no-preload-232338 kubelet[3681]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 16 21:13:54 no-preload-232338 kubelet[3681]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 16 21:13:54 no-preload-232338 kubelet[3681]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 16 21:13:54 no-preload-232338 kubelet[3681]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 16 21:13:54 no-preload-232338 kubelet[3681]: E1216 21:13:54.289004    3681 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383634288221677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:54 no-preload-232338 kubelet[3681]: E1216 21:13:54.289037    3681 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383634288221677,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:14:04 no-preload-232338 kubelet[3681]: E1216 21:14:04.047563    3681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-l7dcr" podUID="fabafb40-1cb8-427b-88a6-37eeb6fd5b77"
	Dec 16 21:14:04 no-preload-232338 kubelet[3681]: E1216 21:14:04.290133    3681 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383644289866453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:14:04 no-preload-232338 kubelet[3681]: E1216 21:14:04.290176    3681 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383644289866453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [dcd2618255da99a588a2bbff1366ef3ae7975c5b7427ce4189ef2c5fd444ba69] <==
	I1216 21:05:00.871168       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 21:05:00.899586       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 21:05:00.899632       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 21:05:00.926031       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 21:05:00.926216       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-232338_676ddeb4-4c29-4c65-b900-27842ee95fa7!
	I1216 21:05:00.928108       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4f00ba71-898c-4f68-a46e-15b5734a6f46", APIVersion:"v1", ResourceVersion:"391", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-232338_676ddeb4-4c29-4c65-b900-27842ee95fa7 became leader
	I1216 21:05:01.026539       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-232338_676ddeb4-4c29-4c65-b900-27842ee95fa7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-232338 -n no-preload-232338
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-232338 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-l7dcr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-232338 describe pod metrics-server-f79f97bbb-l7dcr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-232338 describe pod metrics-server-f79f97bbb-l7dcr: exit status 1 (66.414879ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-l7dcr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-232338 describe pod metrics-server-f79f97bbb-l7dcr: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1216 21:05:50.481031   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:07:13.884637   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-606219 -n embed-certs-606219
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-16 21:14:31.785282502 +0000 UTC m=+5990.258930226
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-606219 -n embed-certs-606219
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-606219 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-606219 logs -n 25: (2.195119247s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-976873                              | stopped-upgrade-976873       | jenkins | v1.34.0 | 16 Dec 24 20:49 UTC | 16 Dec 24 20:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-560677                           | kubernetes-upgrade-560677    | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:50 UTC |
	| start   | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-976873                              | stopped-upgrade-976873       | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:50 UTC |
	| start   | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-270954                              | cert-expiration-270954       | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-606219            | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-270954                              | cert-expiration-270954       | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-384008 | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | disable-driver-mounts-384008                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:52 UTC |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-232338             | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC | 16 Dec 24 20:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-327790  | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC | 16 Dec 24 20:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC |                     |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-847766        | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-606219                 | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC | 16 Dec 24 21:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-232338                  | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC | 16 Dec 24 21:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-327790       | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 20:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 21:04 UTC |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-847766             | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 20:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 20:55:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 20:55:34.390724   60933 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:55:34.390973   60933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:55:34.390982   60933 out.go:358] Setting ErrFile to fd 2...
	I1216 20:55:34.390986   60933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:55:34.391166   60933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:55:34.391763   60933 out.go:352] Setting JSON to false
	I1216 20:55:34.392611   60933 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5879,"bootTime":1734376655,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 20:55:34.392675   60933 start.go:139] virtualization: kvm guest
	I1216 20:55:34.394822   60933 out.go:177] * [old-k8s-version-847766] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 20:55:34.396184   60933 notify.go:220] Checking for updates...
	I1216 20:55:34.396201   60933 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 20:55:34.397724   60933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 20:55:34.399130   60933 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:55:34.400470   60933 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:55:34.401934   60933 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 20:55:34.403341   60933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 20:55:34.405179   60933 config.go:182] Loaded profile config "old-k8s-version-847766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:55:34.405571   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:55:34.405650   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:55:34.421052   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I1216 20:55:34.421523   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:55:34.422018   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:55:34.422035   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:55:34.422373   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:55:34.422646   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:55:34.424565   60933 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I1216 20:55:34.426088   60933 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 20:55:34.426419   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:55:34.426474   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:55:34.441375   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I1216 20:55:34.441833   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:55:34.442327   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:55:34.442349   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:55:34.442658   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:55:34.442852   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:55:34.480512   60933 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 20:55:34.481972   60933 start.go:297] selected driver: kvm2
	I1216 20:55:34.481988   60933 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:55:34.482125   60933 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 20:55:34.482826   60933 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:55:34.482907   60933 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 20:55:34.498561   60933 install.go:137] /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1216 20:55:34.498953   60933 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 20:55:34.498981   60933 cni.go:84] Creating CNI manager for ""
	I1216 20:55:34.499022   60933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:55:34.499060   60933 start.go:340] cluster config:
	{Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:55:34.499164   60933 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:55:34.501128   60933 out.go:177] * Starting "old-k8s-version-847766" primary control-plane node in "old-k8s-version-847766" cluster
	I1216 20:55:29.827520   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:32.899553   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:30.468027   60829 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:55:30.468071   60829 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:55:30.468079   60829 cache.go:56] Caching tarball of preloaded images
	I1216 20:55:30.468192   60829 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:55:30.468206   60829 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1216 20:55:30.468310   60829 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/config.json ...
	I1216 20:55:30.468540   60829 start.go:360] acquireMachinesLock for default-k8s-diff-port-327790: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:55:34.502579   60933 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:55:34.502609   60933 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:55:34.502615   60933 cache.go:56] Caching tarball of preloaded images
	I1216 20:55:34.502716   60933 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:55:34.502731   60933 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1216 20:55:34.502823   60933 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/config.json ...
	I1216 20:55:34.503011   60933 start.go:360] acquireMachinesLock for old-k8s-version-847766: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:55:38.979556   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:42.051532   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:48.131588   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:51.203568   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:57.283622   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:00.355490   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:06.435543   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:09.507559   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:15.587526   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:18.659657   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:24.739528   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:27.811498   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:33.891518   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:36.963554   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:43.043553   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:46.115578   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:52.195583   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:55.267507   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:01.347591   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:04.419562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:10.499479   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:13.571540   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:19.651541   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:22.723545   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:28.803551   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:31.875527   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:37.955563   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:41.027520   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:47.107494   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:50.179566   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:56.259550   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:59.331540   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:05.411562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:08.483592   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:14.563574   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:17.635522   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:23.715548   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:26.787559   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:32.867539   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:35.939502   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:42.019562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:45.091545   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:51.171521   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:54.243542   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:57.248710   60421 start.go:364] duration metric: took 4m14.403979547s to acquireMachinesLock for "no-preload-232338"
	I1216 20:58:57.248796   60421 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:58:57.248804   60421 fix.go:54] fixHost starting: 
	I1216 20:58:57.249232   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:58:57.249288   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:58:57.264905   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39773
	I1216 20:58:57.265423   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:58:57.265982   60421 main.go:141] libmachine: Using API Version  1
	I1216 20:58:57.266005   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:58:57.266396   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:58:57.266636   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:58:57.266807   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 20:58:57.268705   60421 fix.go:112] recreateIfNeeded on no-preload-232338: state=Stopped err=<nil>
	I1216 20:58:57.268730   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	W1216 20:58:57.268918   60421 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:58:57.270855   60421 out.go:177] * Restarting existing kvm2 VM for "no-preload-232338" ...
	I1216 20:58:57.272142   60421 main.go:141] libmachine: (no-preload-232338) Calling .Start
	I1216 20:58:57.272374   60421 main.go:141] libmachine: (no-preload-232338) Ensuring networks are active...
	I1216 20:58:57.273245   60421 main.go:141] libmachine: (no-preload-232338) Ensuring network default is active
	I1216 20:58:57.273660   60421 main.go:141] libmachine: (no-preload-232338) Ensuring network mk-no-preload-232338 is active
	I1216 20:58:57.274025   60421 main.go:141] libmachine: (no-preload-232338) Getting domain xml...
	I1216 20:58:57.274673   60421 main.go:141] libmachine: (no-preload-232338) Creating domain...
	I1216 20:58:57.245632   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:58:57.245753   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 20:58:57.246111   60215 buildroot.go:166] provisioning hostname "embed-certs-606219"
	I1216 20:58:57.246149   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 20:58:57.246399   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 20:58:57.248517   60215 machine.go:96] duration metric: took 4m37.414570479s to provisionDockerMachine
	I1216 20:58:57.248579   60215 fix.go:56] duration metric: took 4m37.437232743s for fixHost
	I1216 20:58:57.248587   60215 start.go:83] releasing machines lock for "embed-certs-606219", held for 4m37.437262865s
	W1216 20:58:57.248614   60215 start.go:714] error starting host: provision: host is not running
	W1216 20:58:57.248791   60215 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1216 20:58:57.248801   60215 start.go:729] Will try again in 5 seconds ...
	I1216 20:58:58.506521   60421 main.go:141] libmachine: (no-preload-232338) Waiting to get IP...
	I1216 20:58:58.507302   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:58.507627   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:58.507699   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:58.507613   61660 retry.go:31] will retry after 230.281045ms: waiting for machine to come up
	I1216 20:58:58.739343   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:58.739781   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:58.739804   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:58.739741   61660 retry.go:31] will retry after 323.962271ms: waiting for machine to come up
	I1216 20:58:59.065388   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:59.065856   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:59.065884   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:59.065816   61660 retry.go:31] will retry after 364.058481ms: waiting for machine to come up
	I1216 20:58:59.431290   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:59.431680   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:59.431707   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:59.431631   61660 retry.go:31] will retry after 569.845721ms: waiting for machine to come up
	I1216 20:59:00.003562   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:00.004030   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:00.004093   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:00.003988   61660 retry.go:31] will retry after 728.729909ms: waiting for machine to come up
	I1216 20:59:00.733954   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:00.734450   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:00.734482   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:00.734388   61660 retry.go:31] will retry after 679.479889ms: waiting for machine to come up
	I1216 20:59:01.415289   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:01.415739   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:01.415763   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:01.415690   61660 retry.go:31] will retry after 1.136560245s: waiting for machine to come up
	I1216 20:59:02.554094   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:02.554523   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:02.554548   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:02.554470   61660 retry.go:31] will retry after 1.299578742s: waiting for machine to come up
	I1216 20:59:02.250499   60215 start.go:360] acquireMachinesLock for embed-certs-606219: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:59:03.855999   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:03.856366   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:03.856399   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:03.856300   61660 retry.go:31] will retry after 1.761269163s: waiting for machine to come up
	I1216 20:59:05.620383   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:05.620837   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:05.620858   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:05.620818   61660 retry.go:31] will retry after 2.100894301s: waiting for machine to come up
	I1216 20:59:07.723931   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:07.724300   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:07.724322   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:07.724273   61660 retry.go:31] will retry after 2.57501483s: waiting for machine to come up
	I1216 20:59:10.302185   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:10.302766   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:10.302802   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:10.302706   61660 retry.go:31] will retry after 2.68456895s: waiting for machine to come up
	I1216 20:59:17.060397   60829 start.go:364] duration metric: took 3m46.591813882s to acquireMachinesLock for "default-k8s-diff-port-327790"
	I1216 20:59:17.060456   60829 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:17.060462   60829 fix.go:54] fixHost starting: 
	I1216 20:59:17.060878   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:17.060935   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:17.079226   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41365
	I1216 20:59:17.079715   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:17.080173   60829 main.go:141] libmachine: Using API Version  1
	I1216 20:59:17.080202   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:17.080554   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:17.080731   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:17.080873   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 20:59:17.082368   60829 fix.go:112] recreateIfNeeded on default-k8s-diff-port-327790: state=Stopped err=<nil>
	I1216 20:59:17.082399   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	W1216 20:59:17.082570   60829 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:17.085104   60829 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-327790" ...
	I1216 20:59:12.988787   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:12.989140   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:12.989172   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:12.989098   61660 retry.go:31] will retry after 2.793178881s: waiting for machine to come up
	I1216 20:59:15.786011   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.786518   60421 main.go:141] libmachine: (no-preload-232338) Found IP for machine: 192.168.50.240
	I1216 20:59:15.786540   60421 main.go:141] libmachine: (no-preload-232338) Reserving static IP address...
	I1216 20:59:15.786564   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has current primary IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.786948   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "no-preload-232338", mac: "52:54:00:07:00:29", ip: "192.168.50.240"} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.786983   60421 main.go:141] libmachine: (no-preload-232338) DBG | skip adding static IP to network mk-no-preload-232338 - found existing host DHCP lease matching {name: "no-preload-232338", mac: "52:54:00:07:00:29", ip: "192.168.50.240"}
	I1216 20:59:15.786995   60421 main.go:141] libmachine: (no-preload-232338) Reserved static IP address: 192.168.50.240
	I1216 20:59:15.787009   60421 main.go:141] libmachine: (no-preload-232338) Waiting for SSH to be available...
	I1216 20:59:15.787022   60421 main.go:141] libmachine: (no-preload-232338) DBG | Getting to WaitForSSH function...
	I1216 20:59:15.789175   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.789509   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.789542   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.789633   60421 main.go:141] libmachine: (no-preload-232338) DBG | Using SSH client type: external
	I1216 20:59:15.789674   60421 main.go:141] libmachine: (no-preload-232338) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa (-rw-------)
	I1216 20:59:15.789709   60421 main.go:141] libmachine: (no-preload-232338) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:15.789718   60421 main.go:141] libmachine: (no-preload-232338) DBG | About to run SSH command:
	I1216 20:59:15.789726   60421 main.go:141] libmachine: (no-preload-232338) DBG | exit 0
	I1216 20:59:15.915980   60421 main.go:141] libmachine: (no-preload-232338) DBG | SSH cmd err, output: <nil>: 
	I1216 20:59:15.916473   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetConfigRaw
	I1216 20:59:15.917088   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:15.919782   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.920161   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.920192   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.920408   60421 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/config.json ...
	I1216 20:59:15.920636   60421 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:15.920654   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:15.920875   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:15.923221   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.923623   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.923650   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.923784   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:15.923971   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:15.924107   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:15.924246   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:15.924395   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:15.924715   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:15.924729   60421 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:16.032079   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:16.032108   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.032397   60421 buildroot.go:166] provisioning hostname "no-preload-232338"
	I1216 20:59:16.032423   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.032649   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.035467   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.035798   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.035826   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.036003   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.036184   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.036335   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.036494   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.036679   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.036847   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.036859   60421 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-232338 && echo "no-preload-232338" | sudo tee /etc/hostname
	I1216 20:59:16.161958   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-232338
	
	I1216 20:59:16.161996   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.164585   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.165086   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.165130   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.165370   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.165578   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.165746   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.165877   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.166015   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.166188   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.166204   60421 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-232338' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-232338/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-232338' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:16.285329   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:16.285374   60421 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:16.285407   60421 buildroot.go:174] setting up certificates
	I1216 20:59:16.285422   60421 provision.go:84] configureAuth start
	I1216 20:59:16.285432   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.285764   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:16.288773   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.289161   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.289192   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.289405   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.291687   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.292042   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.292072   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.292190   60421 provision.go:143] copyHostCerts
	I1216 20:59:16.292260   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:16.292274   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:16.292343   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:16.292470   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:16.292481   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:16.292508   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:16.292563   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:16.292570   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:16.292590   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:16.292649   60421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.no-preload-232338 san=[127.0.0.1 192.168.50.240 localhost minikube no-preload-232338]
	I1216 20:59:16.407096   60421 provision.go:177] copyRemoteCerts
	I1216 20:59:16.407187   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:16.407227   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.410400   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.410725   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.410755   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.410977   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.411188   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.411437   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.411618   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:16.498456   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 20:59:16.525297   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:16.551135   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 20:59:16.576040   60421 provision.go:87] duration metric: took 290.601941ms to configureAuth
	I1216 20:59:16.576074   60421 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:16.576288   60421 config.go:182] Loaded profile config "no-preload-232338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:59:16.576396   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.579169   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.579607   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.579641   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.579795   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.580016   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.580165   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.580311   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.580467   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.580629   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.580643   60421 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:16.816973   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:16.816998   60421 machine.go:96] duration metric: took 896.349056ms to provisionDockerMachine
	I1216 20:59:16.817010   60421 start.go:293] postStartSetup for "no-preload-232338" (driver="kvm2")
	I1216 20:59:16.817030   60421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:16.817044   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:16.817427   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:16.817454   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.820182   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.820550   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.820578   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.820713   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.820914   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.821096   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.821274   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:16.906513   60421 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:16.911314   60421 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:16.911346   60421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:16.911482   60421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:16.911589   60421 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:16.911720   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:16.921890   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:16.947114   60421 start.go:296] duration metric: took 130.089628ms for postStartSetup
	I1216 20:59:16.947192   60421 fix.go:56] duration metric: took 19.698385497s for fixHost
	I1216 20:59:16.947229   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.950156   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.950543   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.950575   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.950780   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.950996   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.951199   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.951394   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.951604   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.951829   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.951843   60421 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:17.060233   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382757.032597424
	
	I1216 20:59:17.060258   60421 fix.go:216] guest clock: 1734382757.032597424
	I1216 20:59:17.060265   60421 fix.go:229] Guest: 2024-12-16 20:59:17.032597424 +0000 UTC Remote: 2024-12-16 20:59:16.947203535 +0000 UTC m=+274.247918927 (delta=85.393889ms)
	I1216 20:59:17.060290   60421 fix.go:200] guest clock delta is within tolerance: 85.393889ms
	I1216 20:59:17.060294   60421 start.go:83] releasing machines lock for "no-preload-232338", held for 19.811539815s
	I1216 20:59:17.060318   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.060636   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:17.063346   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.063742   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.063764   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.063900   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064419   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064647   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064766   60421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:17.064804   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:17.064897   60421 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:17.064923   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:17.067687   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.067897   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068129   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.068166   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068314   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:17.068318   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.068343   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068491   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:17.068573   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:17.068754   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:17.068778   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:17.068914   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:17.069085   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:17.069229   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:17.149502   60421 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:17.184981   60421 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:17.335267   60421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:17.344316   60421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:17.344381   60421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:17.362422   60421 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:17.362450   60421 start.go:495] detecting cgroup driver to use...
	I1216 20:59:17.362526   60421 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:17.379285   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:17.394451   60421 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:17.394514   60421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:17.411856   60421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:17.428028   60421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:17.557602   60421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:17.699140   60421 docker.go:233] disabling docker service ...
	I1216 20:59:17.699215   60421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:17.715236   60421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:17.729268   60421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:17.875729   60421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:18.007569   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:18.022940   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:18.042227   60421 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 20:59:18.042292   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.053011   60421 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:18.053081   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.063767   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.074262   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.085372   60421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:18.098366   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.113619   60421 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.134081   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.145276   60421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:18.155733   60421 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:18.155806   60421 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:18.170492   60421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 20:59:18.182276   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:18.291278   60421 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:18.384618   60421 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:18.384700   60421 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:18.390755   60421 start.go:563] Will wait 60s for crictl version
	I1216 20:59:18.390823   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.395435   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:18.439300   60421 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:18.439390   60421 ssh_runner.go:195] Run: crio --version
	I1216 20:59:18.473976   60421 ssh_runner.go:195] Run: crio --version
	I1216 20:59:18.505262   60421 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 20:59:17.086569   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Start
	I1216 20:59:17.086752   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring networks are active...
	I1216 20:59:17.087656   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring network default is active
	I1216 20:59:17.088082   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring network mk-default-k8s-diff-port-327790 is active
	I1216 20:59:17.088482   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Getting domain xml...
	I1216 20:59:17.089219   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Creating domain...
	I1216 20:59:18.413245   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting to get IP...
	I1216 20:59:18.414327   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.414794   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.414907   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.414784   61807 retry.go:31] will retry after 229.952775ms: waiting for machine to come up
	I1216 20:59:18.646270   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.646677   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.646727   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.646654   61807 retry.go:31] will retry after 341.342128ms: waiting for machine to come up
	I1216 20:59:18.989285   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.989781   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.989809   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.989740   61807 retry.go:31] will retry after 311.937657ms: waiting for machine to come up
	I1216 20:59:19.303619   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.304189   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.304221   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:19.304131   61807 retry.go:31] will retry after 515.638431ms: waiting for machine to come up
	I1216 20:59:19.821478   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.821955   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.821997   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:19.821900   61807 retry.go:31] will retry after 590.835789ms: waiting for machine to come up
	I1216 20:59:18.506840   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:18.510260   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:18.510654   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:18.510689   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:18.510875   60421 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:18.515632   60421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:18.529943   60421 kubeadm.go:883] updating cluster {Name:no-preload-232338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:18.530128   60421 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:59:18.530184   60421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:18.569526   60421 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 20:59:18.569555   60421 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.0 registry.k8s.io/kube-controller-manager:v1.32.0 registry.k8s.io/kube-scheduler:v1.32.0 registry.k8s.io/kube-proxy:v1.32.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 20:59:18.569650   60421 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:18.569669   60421 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.569688   60421 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.569651   60421 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.569774   60421 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.569859   60421 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.569859   60421 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1216 20:59:18.570294   60421 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.571577   60421 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.571602   60421 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.571582   60421 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:18.571585   60421 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.571583   60421 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.571580   60421 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.571828   60421 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.571953   60421 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1216 20:59:18.781052   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.783569   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.795901   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.799273   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.801098   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.802163   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1216 20:59:18.828334   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.897880   60421 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.0" does not exist at hash "a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5" in container runtime
	I1216 20:59:18.897942   60421 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.898003   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.910616   60421 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I1216 20:59:18.910665   60421 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.910713   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.937699   60421 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.0" does not exist at hash "8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3" in container runtime
	I1216 20:59:18.937753   60421 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.937804   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.979455   60421 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.0" does not exist at hash "c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4" in container runtime
	I1216 20:59:18.979500   60421 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.979540   60421 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1216 20:59:18.979555   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.979586   60421 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.979636   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.002472   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.076177   60421 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.0" needs transfer: "registry.k8s.io/kube-proxy:v1.32.0" does not exist at hash "040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08" in container runtime
	I1216 20:59:19.076217   60421 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.076237   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.076252   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.076292   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.076351   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.076408   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.076487   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.076511   60421 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1216 20:59:19.076536   60421 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.076580   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.204766   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.204846   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.204904   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.204959   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.205097   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.205212   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.205285   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.365421   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.365466   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.365512   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.365620   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.365652   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.365771   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.365861   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.539614   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1216 20:59:19.539729   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:19.539740   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.539740   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0
	I1216 20:59:19.539817   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0
	I1216 20:59:19.539839   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:19.539840   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.539885   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.539949   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0
	I1216 20:59:19.540000   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I1216 20:59:19.540029   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:19.540062   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:19.555043   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.32.0 (exists)
	I1216 20:59:19.555076   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.555135   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.555251   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1216 20:59:19.630857   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.16-0 (exists)
	I1216 20:59:19.630949   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1216 20:59:19.630983   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0
	I1216 20:59:19.631030   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.32.0 (exists)
	I1216 20:59:19.631065   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:19.631104   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.32.0 (exists)
	I1216 20:59:19.631069   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:21.838285   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0: (2.283119694s)
	I1216 20:59:21.838328   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 from cache
	I1216 20:59:21.838359   60421 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:21.838394   60421 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.20725659s)
	I1216 20:59:21.838414   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1216 20:59:21.838421   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:21.838361   60421 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.0: (2.207274997s)
	I1216 20:59:21.838471   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.32.0 (exists)
	I1216 20:59:20.414932   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:20.415565   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:20.415597   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:20.415502   61807 retry.go:31] will retry after 698.152518ms: waiting for machine to come up
	I1216 20:59:21.115103   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:21.115597   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:21.115627   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:21.115543   61807 retry.go:31] will retry after 891.02308ms: waiting for machine to come up
	I1216 20:59:22.008636   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.009070   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.009098   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:22.009015   61807 retry.go:31] will retry after 923.634312ms: waiting for machine to come up
	I1216 20:59:22.934238   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.934753   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.934784   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:22.934697   61807 retry.go:31] will retry after 1.142718367s: waiting for machine to come up
	I1216 20:59:24.078935   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:24.079398   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:24.079429   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:24.079363   61807 retry.go:31] will retry after 1.541033224s: waiting for machine to come up
	I1216 20:59:23.901058   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.062611423s)
	I1216 20:59:23.901091   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1216 20:59:23.901122   60421 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:23.901169   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:25.621932   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:25.622401   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:25.622433   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:25.622364   61807 retry.go:31] will retry after 2.600280234s: waiting for machine to come up
	I1216 20:59:28.224296   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:28.224874   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:28.224892   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:28.224828   61807 retry.go:31] will retry after 3.308841216s: waiting for machine to come up
	I1216 20:59:27.793238   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0: (3.892042799s)
	I1216 20:59:27.793280   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 from cache
	I1216 20:59:27.793321   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:27.793420   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:29.552069   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0: (1.758623471s)
	I1216 20:59:29.552102   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 from cache
	I1216 20:59:29.552130   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:29.552177   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:31.708930   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0: (2.156719559s)
	I1216 20:59:31.708971   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 from cache
	I1216 20:59:31.709008   60421 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:31.709057   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:32.660657   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1216 20:59:32.660713   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:32.660775   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:31.537153   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:31.537735   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:31.537795   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:31.537710   61807 retry.go:31] will retry after 4.259700282s: waiting for machine to come up
	I1216 20:59:37.140408   60933 start.go:364] duration metric: took 4m2.637362394s to acquireMachinesLock for "old-k8s-version-847766"
	I1216 20:59:37.140483   60933 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:37.140491   60933 fix.go:54] fixHost starting: 
	I1216 20:59:37.140933   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:37.140988   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:37.159075   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
	I1216 20:59:37.159574   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:37.160140   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:59:37.160172   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:37.160560   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:37.160773   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:37.160889   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetState
	I1216 20:59:37.162561   60933 fix.go:112] recreateIfNeeded on old-k8s-version-847766: state=Stopped err=<nil>
	I1216 20:59:37.162603   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	W1216 20:59:37.162755   60933 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:37.166031   60933 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-847766" ...
	I1216 20:59:34.634064   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0: (1.973261206s)
	I1216 20:59:34.634117   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 from cache
	I1216 20:59:34.634154   60421 cache_images.go:123] Successfully loaded all cached images
	I1216 20:59:34.634160   60421 cache_images.go:92] duration metric: took 16.064590407s to LoadCachedImages
	I1216 20:59:34.634171   60421 kubeadm.go:934] updating node { 192.168.50.240 8443 v1.32.0 crio true true} ...
	I1216 20:59:34.634331   60421 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-232338 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 20:59:34.634420   60421 ssh_runner.go:195] Run: crio config
	I1216 20:59:34.688034   60421 cni.go:84] Creating CNI manager for ""
	I1216 20:59:34.688059   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:34.688068   60421 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:59:34.688093   60421 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.240 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-232338 NodeName:no-preload-232338 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 20:59:34.688277   60421 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-232338"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.240"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.240"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 20:59:34.688356   60421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 20:59:34.699709   60421 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:59:34.699784   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:59:34.710306   60421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1216 20:59:34.732401   60421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:59:34.757561   60421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1216 20:59:34.776094   60421 ssh_runner.go:195] Run: grep 192.168.50.240	control-plane.minikube.internal$ /etc/hosts
	I1216 20:59:34.780341   60421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:34.794025   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:34.930543   60421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:59:34.948720   60421 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338 for IP: 192.168.50.240
	I1216 20:59:34.948752   60421 certs.go:194] generating shared ca certs ...
	I1216 20:59:34.948776   60421 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:59:34.949035   60421 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:59:34.949094   60421 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:59:34.949115   60421 certs.go:256] generating profile certs ...
	I1216 20:59:34.949243   60421 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.key
	I1216 20:59:34.949327   60421 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.key.674e04e3
	I1216 20:59:34.949379   60421 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.key
	I1216 20:59:34.949509   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:59:34.949547   60421 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:59:34.949557   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:59:34.949582   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:59:34.949604   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:59:34.949627   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:59:34.949662   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:34.950648   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:59:34.994491   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:59:35.029853   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:59:35.058834   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:59:35.096870   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 20:59:35.126467   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 20:59:35.160826   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:59:35.186344   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 20:59:35.211125   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:59:35.238705   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:59:35.266485   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:59:35.291729   60421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 20:59:35.311939   60421 ssh_runner.go:195] Run: openssl version
	I1216 20:59:35.318397   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:59:35.332081   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.336967   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.337022   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.343307   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 20:59:35.356515   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:59:35.370380   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.375538   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.375589   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.381736   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:59:35.395677   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:59:35.409029   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.414358   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.414427   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.421352   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 20:59:35.435322   60421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:59:35.440479   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 20:59:35.447408   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 20:59:35.453992   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 20:59:35.460713   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 20:59:35.467109   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 20:59:35.473412   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 20:59:35.479720   60421 kubeadm.go:392] StartCluster: {Name:no-preload-232338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:59:35.479824   60421 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:59:35.479901   60421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:35.521238   60421 cri.go:89] found id: ""
	I1216 20:59:35.521331   60421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 20:59:35.534818   60421 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 20:59:35.534848   60421 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 20:59:35.534893   60421 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 20:59:35.547460   60421 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 20:59:35.548501   60421 kubeconfig.go:125] found "no-preload-232338" server: "https://192.168.50.240:8443"
	I1216 20:59:35.550575   60421 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 20:59:35.560957   60421 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.240
	I1216 20:59:35.561018   60421 kubeadm.go:1160] stopping kube-system containers ...
	I1216 20:59:35.561033   60421 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 20:59:35.561094   60421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:35.598970   60421 cri.go:89] found id: ""
	I1216 20:59:35.599082   60421 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 20:59:35.618027   60421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:59:35.629418   60421 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:59:35.629455   60421 kubeadm.go:157] found existing configuration files:
	
	I1216 20:59:35.629501   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 20:59:35.639825   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:59:35.639896   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:59:35.650676   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 20:59:35.662171   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:59:35.662228   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:59:35.674780   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 20:59:35.686565   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:59:35.686640   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:59:35.698956   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 20:59:35.710813   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:59:35.710874   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 20:59:35.723307   60421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 20:59:35.734712   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:35.863375   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.021512   60421 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.158099337s)
	I1216 20:59:37.021546   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.269641   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.348978   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.428210   60421 api_server.go:52] waiting for apiserver process to appear ...
	I1216 20:59:37.428296   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
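The preceding block (process 60421) is the control-plane restart path: stale kubeconfigs under /etc/kubernetes are removed because they do not reference https://control-plane.minikube.internal:8443, the refreshed kubeadm.yaml is copied into place, and the individual kubeadm init phases are replayed. A minimal bash sketch of that sequence, using only the paths and commands visible in the log (error handling and minikube's own retry logic are omitted):

    # Remove kubeconfigs that do not point at the expected control-plane endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
    # Promote the freshly rendered config and replay the init phases against it.
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
    # The log then waits for the kube-apiserver process (pgrep) and its /healthz endpoint.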
	I1216 20:59:35.800344   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.800861   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Found IP for machine: 192.168.39.162
	I1216 20:59:35.800889   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has current primary IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.800899   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Reserving static IP address...
	I1216 20:59:35.801367   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-327790", mac: "52:54:00:68:47:d5", ip: "192.168.39.162"} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.801395   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Reserved static IP address: 192.168.39.162
	I1216 20:59:35.801419   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | skip adding static IP to network mk-default-k8s-diff-port-327790 - found existing host DHCP lease matching {name: "default-k8s-diff-port-327790", mac: "52:54:00:68:47:d5", ip: "192.168.39.162"}
	I1216 20:59:35.801439   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for SSH to be available...
	I1216 20:59:35.801452   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Getting to WaitForSSH function...
	I1216 20:59:35.803875   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.804226   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.804257   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.804407   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Using SSH client type: external
	I1216 20:59:35.804439   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa (-rw-------)
	I1216 20:59:35.804472   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:35.804493   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | About to run SSH command:
	I1216 20:59:35.804517   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | exit 0
	I1216 20:59:35.935325   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | SSH cmd err, output: <nil>: 
	I1216 20:59:35.935765   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetConfigRaw
	I1216 20:59:35.936442   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:35.938945   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.939369   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.939395   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.939654   60829 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/config.json ...
	I1216 20:59:35.939915   60829 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:35.939938   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:35.940183   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:35.942412   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.942758   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.942787   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.942885   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:35.943067   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:35.943205   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:35.943347   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:35.943501   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:35.943687   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:35.943697   60829 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:36.060257   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:36.060297   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.060608   60829 buildroot.go:166] provisioning hostname "default-k8s-diff-port-327790"
	I1216 20:59:36.060634   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.060853   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.063758   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.064060   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.064097   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.064222   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.064427   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.064600   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.064745   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.064910   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.065132   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.065151   60829 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-327790 && echo "default-k8s-diff-port-327790" | sudo tee /etc/hostname
	I1216 20:59:36.194522   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-327790
	
	I1216 20:59:36.194555   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.197422   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.197770   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.197818   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.198007   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.198217   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.198446   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.198606   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.198803   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.199037   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.199062   60829 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-327790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-327790/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-327790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:36.320779   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:36.320808   60829 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:36.320833   60829 buildroot.go:174] setting up certificates
	I1216 20:59:36.320845   60829 provision.go:84] configureAuth start
	I1216 20:59:36.320854   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.321171   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:36.323701   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.324019   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.324044   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.324254   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.326002   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.326317   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.326348   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.326478   60829 provision.go:143] copyHostCerts
	I1216 20:59:36.326555   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:36.326567   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:36.326635   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:36.326747   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:36.326759   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:36.326786   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:36.326856   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:36.326866   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:36.326887   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:36.326949   60829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-327790 san=[127.0.0.1 192.168.39.162 default-k8s-diff-port-327790 localhost minikube]
	I1216 20:59:36.480215   60829 provision.go:177] copyRemoteCerts
	I1216 20:59:36.480278   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:36.480304   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.482859   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.483213   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.483258   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.483500   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.483712   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.483903   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.484087   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:36.571252   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:36.599399   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1216 20:59:36.624194   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 20:59:36.649294   60829 provision.go:87] duration metric: took 328.437433ms to configureAuth
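configureAuth above generated a server certificate whose SANs cover 127.0.0.1, 192.168.39.162, default-k8s-diff-port-327790, localhost and minikube, and copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest. If that provisioning ever needs to be checked by hand, a hedged one-liner on the guest (not part of the test run) would be:

    # Show the SAN list baked into the freshly provisioned server certificate.
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'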
	I1216 20:59:36.649325   60829 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:36.649494   60829 config.go:182] Loaded profile config "default-k8s-diff-port-327790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:59:36.649567   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.652411   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.652838   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.652868   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.653006   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.653264   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.653490   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.653704   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.653879   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.654059   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.654076   60829 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:36.893006   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:36.893043   60829 machine.go:96] duration metric: took 953.113126ms to provisionDockerMachine
	I1216 20:59:36.893057   60829 start.go:293] postStartSetup for "default-k8s-diff-port-327790" (driver="kvm2")
	I1216 20:59:36.893070   60829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:36.893101   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:36.893466   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:36.893494   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.896151   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.896531   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.896561   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.896683   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.896893   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.897100   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.897280   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:36.982077   60829 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:36.986598   60829 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:36.986624   60829 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:36.986702   60829 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:36.986795   60829 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:36.986919   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:36.996453   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:37.021838   60829 start.go:296] duration metric: took 128.770799ms for postStartSetup
	I1216 20:59:37.021873   60829 fix.go:56] duration metric: took 19.961410312s for fixHost
	I1216 20:59:37.021896   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.024668   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.025171   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.025207   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.025369   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.025591   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.025746   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.025892   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.026040   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:37.026257   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:37.026273   60829 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:37.140228   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382777.110726967
	
	I1216 20:59:37.140254   60829 fix.go:216] guest clock: 1734382777.110726967
	I1216 20:59:37.140264   60829 fix.go:229] Guest: 2024-12-16 20:59:37.110726967 +0000 UTC Remote: 2024-12-16 20:59:37.021877328 +0000 UTC m=+246.706572335 (delta=88.849639ms)
	I1216 20:59:37.140308   60829 fix.go:200] guest clock delta is within tolerance: 88.849639ms
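The fixHost step compares the guest clock (date +%s.%N over SSH) with the host clock and accepts the roughly 89ms delta as within tolerance. A rough bash equivalent of that check, assuming the machine's private key from the log and treating the exact tolerance as minikube-internal:

    # Read the guest clock over SSH, then diff it against the host clock.
    guest=$(ssh -o StrictHostKeyChecking=no \
      -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa \
      docker@192.168.39.162 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v h="$host" -v g="$guest" 'BEGIN { printf "guest clock delta: %.3fs\n", h - g }'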
	I1216 20:59:37.140315   60829 start.go:83] releasing machines lock for "default-k8s-diff-port-327790", held for 20.079880217s
	I1216 20:59:37.140347   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.140632   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:37.143268   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.143748   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.143775   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.143983   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144601   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144789   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144883   60829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:37.144930   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.145028   60829 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:37.145060   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.147817   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148192   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.148219   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148315   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148364   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.148576   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.148755   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.148776   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148804   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.148964   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:37.149020   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.149141   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.149285   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.149439   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:37.232354   60829 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:37.261803   60829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:37.416094   60829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:37.425458   60829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:37.425566   60829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:37.448873   60829 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:37.448914   60829 start.go:495] detecting cgroup driver to use...
	I1216 20:59:37.449014   60829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:37.472474   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:37.492445   60829 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:37.492518   60829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:37.510478   60829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:37.525452   60829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:37.642105   60829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:37.814506   60829 docker.go:233] disabling docker service ...
	I1216 20:59:37.814590   60829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:37.829046   60829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:37.845049   60829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:38.009931   60829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:38.158000   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:38.174376   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:38.197489   60829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 20:59:38.197555   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.213974   60829 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:38.214034   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.230383   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.244599   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.257574   60829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:38.273377   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.285854   60829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.312687   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
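The sed commands above rewrite the CRI-O drop-in so it uses the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl. A quick way to confirm the result on the guest; the values come from the commands in the log, while the exact surrounding layout of 02-crio.conf is an assumption:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",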
	I1216 20:59:38.329105   60829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:38.343596   60829 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:38.343679   60829 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:38.362530   60829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
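The netfilter probe above fails because the bridge sysctl does not exist until the br_netfilter module is loaded, which is exactly how the flow recovers: load the module, then enable IPv4 forwarding. A compact sketch of that recovery, taken from the commands in the log (persisting the settings across reboots is out of scope here):

    # If the bridge-nf sysctl is missing, load br_netfilter first.
    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
      sudo modprobe br_netfilter
    fi
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"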
	I1216 20:59:38.374384   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:38.564793   60829 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:38.682792   60829 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:38.682873   60829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:38.689164   60829 start.go:563] Will wait 60s for crictl version
	I1216 20:59:38.689251   60829 ssh_runner.go:195] Run: which crictl
	I1216 20:59:38.693994   60829 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:38.746808   60829 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:38.746913   60829 ssh_runner.go:195] Run: crio --version
	I1216 20:59:38.788490   60829 ssh_runner.go:195] Run: crio --version
	I1216 20:59:38.823957   60829 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 20:59:37.167470   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .Start
	I1216 20:59:37.167715   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring networks are active...
	I1216 20:59:37.168626   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring network default is active
	I1216 20:59:37.169114   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring network mk-old-k8s-version-847766 is active
	I1216 20:59:37.169670   60933 main.go:141] libmachine: (old-k8s-version-847766) Getting domain xml...
	I1216 20:59:37.170345   60933 main.go:141] libmachine: (old-k8s-version-847766) Creating domain...
	I1216 20:59:38.535579   60933 main.go:141] libmachine: (old-k8s-version-847766) Waiting to get IP...
	I1216 20:59:38.536661   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:38.537089   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:38.537174   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:38.537078   61973 retry.go:31] will retry after 277.62307ms: waiting for machine to come up
	I1216 20:59:38.816788   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:38.817329   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:38.817360   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:38.817272   61973 retry.go:31] will retry after 346.694382ms: waiting for machine to come up
	I1216 20:59:39.165778   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:39.166377   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:39.166436   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:39.166355   61973 retry.go:31] will retry after 416.599295ms: waiting for machine to come up
	I1216 20:59:38.825413   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:38.828442   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:38.828836   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:38.828870   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:38.829125   60829 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:38.833715   60829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:38.848989   60829 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-327790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:38.849121   60829 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:59:38.849169   60829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:38.891356   60829 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 20:59:38.891432   60829 ssh_runner.go:195] Run: which lz4
	I1216 20:59:38.896669   60829 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 20:59:38.901209   60829 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 20:59:38.901253   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1216 20:59:37.928929   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:38.428939   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:38.454184   60421 api_server.go:72] duration metric: took 1.02597754s to wait for apiserver process to appear ...
	I1216 20:59:38.454211   60421 api_server.go:88] waiting for apiserver healthz status ...
	I1216 20:59:38.454252   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:38.454842   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 20:59:38.954378   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
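Process 60421 is now polling the apiserver's /healthz endpoint; the first probe fails with connection refused because the static pods are still coming up, and the loop simply retries. A hedged curl equivalent of that wait (address and port from the log; the 0.5s interval is an assumption, and it presumes anonymous access to /healthz, which the default public-info-viewer binding allows):

    # Poll /healthz until the apiserver answers "ok".
    until curl -sk --max-time 2 https://192.168.50.240:8443/healthz | grep -q ok; do
      sleep 0.5
    done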
	I1216 20:59:39.585259   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:39.585762   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:39.585791   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:39.585737   61973 retry.go:31] will retry after 526.969594ms: waiting for machine to come up
	I1216 20:59:40.114653   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:40.115175   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:40.115205   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:40.115140   61973 retry.go:31] will retry after 502.283372ms: waiting for machine to come up
	I1216 20:59:40.619067   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:40.619633   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:40.619682   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:40.619571   61973 retry.go:31] will retry after 764.799982ms: waiting for machine to come up
	I1216 20:59:41.385515   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:41.386066   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:41.386100   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:41.386027   61973 retry.go:31] will retry after 982.237202ms: waiting for machine to come up
	I1216 20:59:42.369934   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:42.370414   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:42.370449   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:42.370373   61973 retry.go:31] will retry after 1.163280736s: waiting for machine to come up
	I1216 20:59:43.534829   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:43.535194   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:43.535224   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:43.535143   61973 retry.go:31] will retry after 1.630958514s: waiting for machine to come up
	I1216 20:59:40.539994   60829 crio.go:462] duration metric: took 1.643361409s to copy over tarball
	I1216 20:59:40.540066   60829 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 20:59:42.840346   60829 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.30025199s)
	I1216 20:59:42.840382   60829 crio.go:469] duration metric: took 2.300357568s to extract the tarball
	I1216 20:59:42.840392   60829 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 20:59:42.881650   60829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:42.928089   60829 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 20:59:42.928120   60829 cache_images.go:84] Images are preloaded, skipping loading
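Because no preloaded images were found on the guest, the flow above ships the v1.32.0 cri-o preload tarball over SSH, unpacks it under /var with lz4, deletes the tarball, and re-runs crictl images to confirm everything is now present. A condensed sketch of the same steps, assuming the machine's SSH key is available to the client and staging through /tmp (the log writes directly to /preloaded.tar.lz4):

    # Copy the preload tarball to the guest and unpack the image store under /var.
    scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 \
      docker@192.168.39.162:/tmp/preloaded.tar.lz4
    ssh docker@192.168.39.162 '
      sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 &&
      sudo rm /tmp/preloaded.tar.lz4 &&
      sudo crictl images --output json >/dev/null
    '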
	I1216 20:59:42.928129   60829 kubeadm.go:934] updating node { 192.168.39.162 8444 v1.32.0 crio true true} ...
	I1216 20:59:42.928222   60829 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-327790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 20:59:42.928286   60829 ssh_runner.go:195] Run: crio config
	I1216 20:59:42.983315   60829 cni.go:84] Creating CNI manager for ""
	I1216 20:59:42.983348   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:42.983360   60829 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:59:42.983396   60829 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8444 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-327790 NodeName:default-k8s-diff-port-327790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 20:59:42.983556   60829 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-327790"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.162"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
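The block above is the rendered kubeadm/kubelet/kube-proxy configuration that is copied to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal, hypothetical sketch of rendering such a config from Go with text/template follows; the struct fields and template text are illustrative only, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// kubeadmValues holds the handful of cluster settings substituted into the
// config; the field names here are illustrative.
type kubeadmValues struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	v := kubeadmValues{
		AdvertiseAddress: "192.168.39.162",
		BindPort:         8444,
		NodeName:         "default-k8s-diff-port-327790",
		PodSubnet:        "10.244.0.0/16",
	}
	// Render to stdout; minikube instead ships the rendered file to the node over SSH.
	if err := t.Execute(os.Stdout, v); err != nil {
		panic(err)
	}
}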
	I1216 20:59:42.983631   60829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 20:59:42.996192   60829 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:59:42.996283   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:59:43.008389   60829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 20:59:43.027984   60829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:59:43.045672   60829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1216 20:59:43.063620   60829 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1216 20:59:43.067925   60829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
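	The grep/rewrite pair above pins control-plane.minikube.internal to the node IP in /etc/hosts. A rough Go equivalent of that rewrite, operating on an illustrative copy of the hosts file (editing the real /etc/hosts needs root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so that exactly one line maps name to ip,
// mirroring the grep -v / echo pipeline in the log above.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing "<addr>\t<name>" mapping before appending the new one.
		if strings.HasSuffix(strings.TrimSpace(line), "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/tmp/hosts", "192.168.39.162", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}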
	I1216 20:59:43.082946   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:43.220929   60829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:59:43.243843   60829 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790 for IP: 192.168.39.162
	I1216 20:59:43.243870   60829 certs.go:194] generating shared ca certs ...
	I1216 20:59:43.243888   60829 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:59:43.244125   60829 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:59:43.244185   60829 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:59:43.244200   60829 certs.go:256] generating profile certs ...
	I1216 20:59:43.244324   60829 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.key
	I1216 20:59:43.244400   60829 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.key.0f0bf709
	I1216 20:59:43.244449   60829 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.key
	I1216 20:59:43.244606   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:59:43.244649   60829 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:59:43.244666   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:59:43.244689   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:59:43.244711   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:59:43.244731   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:59:43.244776   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:43.245449   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:59:43.283598   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:59:43.309321   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:59:43.343071   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:59:43.379763   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1216 20:59:43.409794   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 20:59:43.437074   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:59:43.462616   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 20:59:43.487711   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:59:43.512636   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:59:43.539050   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:59:43.566507   60829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 20:59:43.584425   60829 ssh_runner.go:195] Run: openssl version
	I1216 20:59:43.590996   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:59:43.604384   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.609342   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.609404   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.615902   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:59:43.627432   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:59:43.638929   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.644189   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.644267   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.650550   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 20:59:43.662678   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:59:43.674981   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.680022   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.680113   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.686159   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
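	Each of the three sequences above hashes a CA with openssl x509 -hash and links it into /etc/ssl/certs under the <subject-hash>.0 name that OpenSSL-based clients look up. A small sketch of the same step in Go, assuming the openssl binary is on PATH and using illustrative paths:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash for certPath and creates the
// <hash>.0 symlink in certsDir, matching the ln -fs commands in the log.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace a stale link if one exists
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}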
	I1216 20:59:43.697897   60829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:59:43.702835   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 20:59:43.709262   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 20:59:43.716370   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 20:59:43.725031   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 20:59:43.732876   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 20:59:43.739810   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
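	The -checkend 86400 invocations above confirm that each control-plane certificate stays valid for at least another 24 hours before it is reused. The equivalent check in pure Go, sketched with an illustrative certificate path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first PEM certificate in path expires
// before now+d, the same question openssl x509 -checkend answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}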
	I1216 20:59:43.746998   60829 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-327790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:59:43.747131   60829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:59:43.747189   60829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:43.791895   60829 cri.go:89] found id: ""
	I1216 20:59:43.791979   60829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 20:59:43.802858   60829 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 20:59:43.802886   60829 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 20:59:43.802943   60829 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 20:59:43.813313   60829 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 20:59:43.814296   60829 kubeconfig.go:125] found "default-k8s-diff-port-327790" server: "https://192.168.39.162:8444"
	I1216 20:59:43.816374   60829 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 20:59:43.825834   60829 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1216 20:59:43.825871   60829 kubeadm.go:1160] stopping kube-system containers ...
	I1216 20:59:43.825884   60829 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 20:59:43.825934   60829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:43.870890   60829 cri.go:89] found id: ""
	I1216 20:59:43.870965   60829 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 20:59:43.888155   60829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:59:43.898356   60829 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:59:43.898381   60829 kubeadm.go:157] found existing configuration files:
	
	I1216 20:59:43.898445   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1216 20:59:43.908232   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:59:43.908310   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:59:43.918637   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1216 20:59:43.928255   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:59:43.928343   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:59:43.938479   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1216 20:59:43.948085   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:59:43.948157   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:59:43.959080   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1216 20:59:43.969218   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:59:43.969275   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
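	The loop above greps each kubeconfig for the expected https://control-plane.minikube.internal:8444 endpoint and removes files that do not contain it, so the subsequent kubeadm init phases regenerate them. A compact sketch of that cleanup, with the file paths taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfigs deletes any kubeconfig that does not point at the
// expected control-plane endpoint, forcing kubeadm to regenerate it.
func removeStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			continue // missing file: nothing to clean up
		}
		if !strings.Contains(string(data), endpoint) {
			if err := os.Remove(p); err != nil {
				fmt.Fprintf(os.Stderr, "remove %s: %v\n", p, err)
			}
		}
	}
}

func main() {
	removeStaleKubeconfigs("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}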
	I1216 20:59:43.980063   60829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 20:59:43.990768   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:44.125741   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:44.845177   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:45.049512   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:45.162055   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:45.284927   60829 api_server.go:52] waiting for apiserver process to appear ...
	I1216 20:59:45.285036   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:43.954985   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:43.955087   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:45.168144   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:45.168719   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:45.168750   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:45.168671   61973 retry.go:31] will retry after 1.835631107s: waiting for machine to come up
	I1216 20:59:47.005854   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:47.006380   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:47.006422   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:47.006339   61973 retry.go:31] will retry after 1.943800898s: waiting for machine to come up
	I1216 20:59:48.951552   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:48.952050   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:48.952114   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:48.952008   61973 retry.go:31] will retry after 2.949898251s: waiting for machine to come up
	I1216 20:59:45.785964   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:46.285989   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:46.339555   60829 api_server.go:72] duration metric: took 1.054628295s to wait for apiserver process to appear ...
	I1216 20:59:46.339597   60829 api_server.go:88] waiting for apiserver healthz status ...
	I1216 20:59:46.339636   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:46.340197   60829 api_server.go:269] stopped: https://192.168.39.162:8444/healthz: Get "https://192.168.39.162:8444/healthz": dial tcp 192.168.39.162:8444: connect: connection refused
	I1216 20:59:46.839771   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.461907   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 20:59:49.461943   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 20:59:49.461958   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.513069   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 20:59:49.513121   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 20:59:49.840517   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.846051   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 20:59:49.846086   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 20:59:50.339824   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:50.347663   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 20:59:50.347708   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 20:59:50.840385   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:50.844943   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1216 20:59:50.854518   60829 api_server.go:141] control plane version: v1.32.0
	I1216 20:59:50.854546   60829 api_server.go:131] duration metric: took 4.514941385s to wait for apiserver health ...
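	The health wait above keeps polling /healthz, treating connection-refused, 403 and 500 responses as transient until a 200 arrives. A minimal polling loop under the same assumptions; TLS verification is skipped here purely for illustration, since a real client should trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cert signed by the cluster CA; skipping
		// verification keeps this sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.162:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}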
	I1216 20:59:50.854554   60829 cni.go:84] Creating CNI manager for ""
	I1216 20:59:50.854560   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:50.856538   60829 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 20:59:48.956352   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:48.956414   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:51.905108   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:51.905560   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:51.905594   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:51.905505   61973 retry.go:31] will retry after 3.44069953s: waiting for machine to come up
	I1216 20:59:50.858169   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 20:59:50.882809   60829 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 20:59:50.912787   60829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 20:59:50.933650   60829 system_pods.go:59] 8 kube-system pods found
	I1216 20:59:50.933693   60829 system_pods.go:61] "coredns-668d6bf9bc-tqh9s" [56b4db37-b6bc-49eb-b45f-b8b4d1f16eed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 20:59:50.933705   60829 system_pods.go:61] "etcd-default-k8s-diff-port-327790" [067f7c41-3763-42d3-af06-ad50fad3d206] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 20:59:50.933713   60829 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-327790" [f1964b5b-9d2b-4f82-afc6-2f359c9b8827] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 20:59:50.933722   60829 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-327790" [fd7479e3-be26-4bb0-b53a-e40766a33996] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 20:59:50.933742   60829 system_pods.go:61] "kube-proxy-mplxr" [027abdc5-7022-4528-a93f-36f3b10115ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 20:59:50.933751   60829 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-327790" [d7416a53-ccb4-46fd-9992-46cbf7ec0a3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 20:59:50.933763   60829 system_pods.go:61] "metrics-server-f79f97bbb-hlt7s" [d42906e3-387c-493e-9d06-5bb654dc9784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 20:59:50.933772   60829 system_pods.go:61] "storage-provisioner" [c774635a-faca-4a1a-8f4e-2161447ebaa1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 20:59:50.933785   60829 system_pods.go:74] duration metric: took 20.968988ms to wait for pod list to return data ...
	I1216 20:59:50.933804   60829 node_conditions.go:102] verifying NodePressure condition ...
	I1216 20:59:50.937958   60829 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 20:59:50.937986   60829 node_conditions.go:123] node cpu capacity is 2
	I1216 20:59:50.938008   60829 node_conditions.go:105] duration metric: took 4.196302ms to run NodePressure ...
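	Both checks above, the kube-system pod inventory and the NodePressure/allocatable capacity read, are plain Kubernetes API queries. A small client-go sketch of the same reads, assuming a kubeconfig at an illustrative path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same data as "waiting for kube-system pods to appear".
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	// Same data as the node capacity lines (ephemeral storage, cpu).
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Allocatable[corev1.ResourceCPU]
		fmt.Printf("node %s allocatable cpu: %s\n", n.Name, cpu.String())
	}
}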
	I1216 20:59:50.938030   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:51.231412   60829 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 20:59:51.236005   60829 kubeadm.go:739] kubelet initialised
	I1216 20:59:51.236029   60829 kubeadm.go:740] duration metric: took 4.585977ms waiting for restarted kubelet to initialise ...
	I1216 20:59:51.236042   60829 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 20:59:51.243608   60829 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace to be "Ready" ...
	I1216 20:59:53.250907   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 20:59:56.696377   60215 start.go:364] duration metric: took 54.44579772s to acquireMachinesLock for "embed-certs-606219"
	I1216 20:59:56.696450   60215 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:56.696470   60215 fix.go:54] fixHost starting: 
	I1216 20:59:56.696862   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:56.696902   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:56.714627   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42069
	I1216 20:59:56.715074   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:56.715599   60215 main.go:141] libmachine: Using API Version  1
	I1216 20:59:56.715629   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:56.715953   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:56.716116   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 20:59:56.716252   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 20:59:56.717876   60215 fix.go:112] recreateIfNeeded on embed-certs-606219: state=Stopped err=<nil>
	I1216 20:59:56.717902   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	W1216 20:59:56.718088   60215 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:56.720072   60215 out.go:177] * Restarting existing kvm2 VM for "embed-certs-606219" ...
	I1216 20:59:53.957328   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:53.957395   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:55.349557   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.350105   60933 main.go:141] libmachine: (old-k8s-version-847766) Found IP for machine: 192.168.72.240
	I1216 20:59:55.350129   60933 main.go:141] libmachine: (old-k8s-version-847766) Reserving static IP address...
	I1216 20:59:55.350140   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has current primary IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.350574   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "old-k8s-version-847766", mac: "52:54:00:c4:f2:8a", ip: "192.168.72.240"} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.350608   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | skip adding static IP to network mk-old-k8s-version-847766 - found existing host DHCP lease matching {name: "old-k8s-version-847766", mac: "52:54:00:c4:f2:8a", ip: "192.168.72.240"}
	I1216 20:59:55.350623   60933 main.go:141] libmachine: (old-k8s-version-847766) Reserved static IP address: 192.168.72.240
	I1216 20:59:55.350646   60933 main.go:141] libmachine: (old-k8s-version-847766) Waiting for SSH to be available...
	I1216 20:59:55.350662   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Getting to WaitForSSH function...
	I1216 20:59:55.353011   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.353346   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.353369   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.353535   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Using SSH client type: external
	I1216 20:59:55.353560   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa (-rw-------)
	I1216 20:59:55.353592   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:55.353606   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | About to run SSH command:
	I1216 20:59:55.353621   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | exit 0
	I1216 20:59:55.480726   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | SSH cmd err, output: <nil>: 
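	WaitForSSH above retries until the guest accepts an SSH connection and runs exit 0. A simplified TCP-level readiness probe in the same spirit; it only verifies that the port accepts connections, not that authentication succeeds:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPort dials addr repeatedly until a TCP connection succeeds or the
// deadline passes.
func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("%s not reachable after %s", addr, timeout)
}

func main() {
	if err := waitForPort("192.168.72.240:22", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}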
	I1216 20:59:55.481062   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetConfigRaw
	I1216 20:59:55.481692   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:55.484113   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.484500   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.484537   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.484769   60933 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/config.json ...
	I1216 20:59:55.484985   60933 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:55.485008   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:55.485220   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.487511   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.487835   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.487862   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.487958   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.488134   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.488268   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.488405   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.488546   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.488725   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.488735   60933 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:55.596092   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:55.596127   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.596401   60933 buildroot.go:166] provisioning hostname "old-k8s-version-847766"
	I1216 20:59:55.596426   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.596644   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.599286   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.599631   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.599662   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.599814   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.600010   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.600166   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.600299   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.600462   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.600665   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.600678   60933 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-847766 && echo "old-k8s-version-847766" | sudo tee /etc/hostname
	I1216 20:59:55.731851   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-847766
	
	I1216 20:59:55.731879   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.734802   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.735155   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.735186   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.735422   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.735650   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.735815   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.736030   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.736194   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.736377   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.736393   60933 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-847766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-847766/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-847766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:55.857050   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:55.857108   60933 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:55.857138   60933 buildroot.go:174] setting up certificates
	I1216 20:59:55.857163   60933 provision.go:84] configureAuth start
	I1216 20:59:55.857180   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.857505   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:55.860286   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.860613   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.860643   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.860826   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.863292   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.863682   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.863709   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.863871   60933 provision.go:143] copyHostCerts
	I1216 20:59:55.863920   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:55.863929   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:55.863986   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:55.864069   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:55.864077   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:55.864104   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:55.864159   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:55.864177   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:55.864202   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:55.864250   60933 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-847766 san=[127.0.0.1 192.168.72.240 localhost minikube old-k8s-version-847766]
	I1216 20:59:56.058548   60933 provision.go:177] copyRemoteCerts
	I1216 20:59:56.058603   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:56.058638   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.061354   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.061666   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.061707   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.061838   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.062039   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.062200   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.062356   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.146788   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:56.172789   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1216 20:59:56.198040   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 20:59:56.222476   60933 provision.go:87] duration metric: took 365.299433ms to configureAuth
	I1216 20:59:56.222505   60933 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:56.222706   60933 config.go:182] Loaded profile config "old-k8s-version-847766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:59:56.222790   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.225376   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.225752   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.225779   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.225965   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.226182   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.226363   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.226516   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.226687   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:56.226887   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:56.226906   60933 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:56.451434   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:56.451464   60933 machine.go:96] duration metric: took 966.463181ms to provisionDockerMachine
	I1216 20:59:56.451478   60933 start.go:293] postStartSetup for "old-k8s-version-847766" (driver="kvm2")
	I1216 20:59:56.451513   60933 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:56.451541   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.451926   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:56.451980   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.454840   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.455302   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.455331   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.455454   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.455661   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.455814   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.455988   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.542904   60933 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:56.547362   60933 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:56.547389   60933 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:56.547467   60933 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:56.547568   60933 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:56.547677   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:56.557902   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:56.582796   60933 start.go:296] duration metric: took 131.303406ms for postStartSetup
	I1216 20:59:56.582846   60933 fix.go:56] duration metric: took 19.442354832s for fixHost
	I1216 20:59:56.582872   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.585478   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.585803   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.585831   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.586011   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.586194   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.586358   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.586472   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.586640   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:56.586809   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:56.586819   60933 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:56.696254   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382796.650794736
	
	I1216 20:59:56.696274   60933 fix.go:216] guest clock: 1734382796.650794736
	I1216 20:59:56.696281   60933 fix.go:229] Guest: 2024-12-16 20:59:56.650794736 +0000 UTC Remote: 2024-12-16 20:59:56.582851742 +0000 UTC m=+262.230512454 (delta=67.942994ms)
	I1216 20:59:56.696299   60933 fix.go:200] guest clock delta is within tolerance: 67.942994ms
	I1216 20:59:56.696304   60933 start.go:83] releasing machines lock for "old-k8s-version-847766", held for 19.555844424s
	I1216 20:59:56.696333   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.696608   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:56.699486   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.699932   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.699964   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.700068   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700645   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700846   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700948   60933 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:56.701007   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.701115   60933 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:56.701140   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.703937   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704117   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704314   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.704342   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704496   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.704567   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.704601   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704680   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.704762   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.704836   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.704982   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.704987   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.705134   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.705259   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.784295   60933 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:56.817481   60933 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:56.968124   60933 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:56.979827   60933 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:56.979892   60933 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:56.997867   60933 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:56.997891   60933 start.go:495] detecting cgroup driver to use...
	I1216 20:59:56.997954   60933 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:57.016064   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:57.031596   60933 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:57.031665   60933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:57.047562   60933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:57.062737   60933 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:57.183918   60933 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:57.354699   60933 docker.go:233] disabling docker service ...
	I1216 20:59:57.354794   60933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:57.373311   60933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:57.390014   60933 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:57.523623   60933 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:57.656261   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:57.671374   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:57.692647   60933 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1216 20:59:57.692709   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.704496   60933 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:57.704548   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.715848   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.727022   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.738899   60933 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:57.756457   60933 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:57.773236   60933 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:57.773289   60933 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:57.789209   60933 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 20:59:57.800881   60933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:57.927794   60933 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:58.038173   60933 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:58.038256   60933 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:58.044633   60933 start.go:563] Will wait 60s for crictl version
	I1216 20:59:58.044705   60933 ssh_runner.go:195] Run: which crictl
	I1216 20:59:58.048781   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:58.088449   60933 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:58.088579   60933 ssh_runner.go:195] Run: crio --version
	I1216 20:59:58.119211   60933 ssh_runner.go:195] Run: crio --version
	I1216 20:59:58.151411   60933 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1216 20:59:58.152582   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:58.155196   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:58.155558   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:58.155587   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:58.155763   60933 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:58.160369   60933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:58.174013   60933 kubeadm.go:883] updating cluster {Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:58.174155   60933 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:59:58.174212   60933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:58.226674   60933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 20:59:58.226747   60933 ssh_runner.go:195] Run: which lz4
	I1216 20:59:58.231330   60933 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 20:59:58.236178   60933 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 20:59:58.236214   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1216 20:59:56.721746   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Start
	I1216 20:59:56.721946   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring networks are active...
	I1216 20:59:56.722810   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring network default is active
	I1216 20:59:56.723209   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring network mk-embed-certs-606219 is active
	I1216 20:59:56.723644   60215 main.go:141] libmachine: (embed-certs-606219) Getting domain xml...
	I1216 20:59:56.724387   60215 main.go:141] libmachine: (embed-certs-606219) Creating domain...
	I1216 20:59:58.005906   60215 main.go:141] libmachine: (embed-certs-606219) Waiting to get IP...
	I1216 20:59:58.006646   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.007021   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.007136   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.007017   62108 retry.go:31] will retry after 280.124694ms: waiting for machine to come up
	I1216 20:59:58.288552   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.289049   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.289078   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.289013   62108 retry.go:31] will retry after 299.873899ms: waiting for machine to come up
	I1216 20:59:58.590757   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.591593   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.591625   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.591487   62108 retry.go:31] will retry after 486.884982ms: waiting for machine to come up
	I1216 20:59:59.079996   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:59.080618   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:59.080649   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:59.080581   62108 retry.go:31] will retry after 608.856993ms: waiting for machine to come up
	I1216 20:59:59.691549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:59.692107   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:59.692139   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:59.692064   62108 retry.go:31] will retry after 730.774006ms: waiting for machine to come up
	I1216 20:59:55.752607   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 20:59:58.251902   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:00.254126   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 20:59:58.958114   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:58.958161   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.567722   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": read tcp 192.168.50.1:38738->192.168.50.240:8443: read: connection reset by peer
	I1216 20:59:59.567773   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.568271   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 20:59:59.954745   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.955447   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 21:00:00.455116   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:00.456036   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 21:00:00.954418   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:00.100507   60933 crio.go:462] duration metric: took 1.869217257s to copy over tarball
	I1216 21:00:00.100619   60933 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 21:00:03.581430   60933 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.480755636s)
	I1216 21:00:03.581469   60933 crio.go:469] duration metric: took 3.480924144s to extract the tarball
	I1216 21:00:03.581478   60933 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 21:00:03.627932   60933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:03.667985   60933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 21:00:03.668013   60933 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 21:00:03.668078   60933 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:03.668110   60933 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.668207   60933 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.668262   60933 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.668262   60933 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:03.668332   60933 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1216 21:00:03.668215   60933 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.668092   60933 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.670096   60933 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:03.670294   60933 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.670305   60933 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.670305   60933 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.670333   60933 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.670394   60933 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1216 21:00:03.670396   60933 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:03.670467   60933 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.861573   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.869704   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.885911   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.904748   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1216 21:00:03.905328   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.906138   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.936548   60933 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1216 21:00:03.936658   60933 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.936736   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.019039   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.033811   60933 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1216 21:00:04.033863   60933 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.033927   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.082946   60933 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1216 21:00:04.082995   60933 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.083008   60933 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1216 21:00:04.083050   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083055   60933 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1216 21:00:04.083063   60933 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1216 21:00:04.083073   60933 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.083133   60933 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1216 21:00:04.083203   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.083205   60933 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.083306   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083145   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083139   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.123434   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.123702   60933 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1216 21:00:04.123740   60933 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.123786   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.150878   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:04.155586   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.155774   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.155877   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.155968   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.156205   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.226110   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.226429   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:00.424272   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:00.424766   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:00.424795   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:00.424712   62108 retry.go:31] will retry after 947.177724ms: waiting for machine to come up
	I1216 21:00:01.373798   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:01.374448   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:01.374486   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:01.374376   62108 retry.go:31] will retry after 755.735247ms: waiting for machine to come up
	I1216 21:00:02.132092   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:02.132690   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:02.132716   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:02.132636   62108 retry.go:31] will retry after 1.25933291s: waiting for machine to come up
	I1216 21:00:03.393390   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:03.393951   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:03.393987   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:03.393887   62108 retry.go:31] will retry after 1.654271195s: waiting for machine to come up
	I1216 21:00:00.768561   60829 pod_ready.go:93] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:00.768603   60829 pod_ready.go:82] duration metric: took 9.524968022s for pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:00.768619   60829 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:02.778467   60829 pod_ready.go:93] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:02.778507   60829 pod_ready.go:82] duration metric: took 2.009878604s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:02.778523   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:03.290454   60829 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:03.290490   60829 pod_ready.go:82] duration metric: took 511.956426ms for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:03.290505   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:04.533609   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:04.533639   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:04.533655   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:04.679801   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:04.679836   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:04.955306   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.723827   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.723870   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:05.723892   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.750638   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.750674   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:05.955092   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.983280   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.983332   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:06.454742   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:06.467886   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:06.467924   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:06.954428   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:06.960039   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 200:
	ok
	I1216 21:00:06.969187   60421 api_server.go:141] control plane version: v1.32.0
	I1216 21:00:06.969231   60421 api_server.go:131] duration metric: took 28.515011952s to wait for apiserver health ...
	I1216 21:00:06.969242   60421 cni.go:84] Creating CNI manager for ""
	I1216 21:00:06.969249   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:06.971475   60421 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:00:06.973035   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:00:06.992348   60421 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:00:07.020819   60421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:00:07.035254   60421 system_pods.go:59] 8 kube-system pods found
	I1216 21:00:07.035308   60421 system_pods.go:61] "coredns-668d6bf9bc-snhjf" [c0cf42c8-521a-4d02-9d43-ff7a700b0eca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:00:07.035321   60421 system_pods.go:61] "etcd-no-preload-232338" [01ca2051-5953-44fd-bfff-40aa16ec7aca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 21:00:07.035335   60421 system_pods.go:61] "kube-apiserver-no-preload-232338" [f1fbbb3b-a0e5-4200-89ef-67085e51a31d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 21:00:07.035359   60421 system_pods.go:61] "kube-controller-manager-no-preload-232338" [200039ad-1a2c-4dc4-8307-d8c882d69f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 21:00:07.035373   60421 system_pods.go:61] "kube-proxy-5mw2b" [8fbddf14-8697-451a-a3c7-873fdd437247] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 21:00:07.035382   60421 system_pods.go:61] "kube-scheduler-no-preload-232338" [1b9a7a43-59fc-44ba-9863-04fb90e6554f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 21:00:07.035396   60421 system_pods.go:61] "metrics-server-f79f97bbb-5xf67" [447144e5-11d8-48f7-b2fd-7ab9fb3c04de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:00:07.035409   60421 system_pods.go:61] "storage-provisioner" [fb293bd2-f5be-4086-b821-ffd7df58dd5e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 21:00:07.035420   60421 system_pods.go:74] duration metric: took 14.571089ms to wait for pod list to return data ...
	I1216 21:00:07.035431   60421 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:00:07.044467   60421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:00:07.044592   60421 node_conditions.go:123] node cpu capacity is 2
	I1216 21:00:07.044633   60421 node_conditions.go:105] duration metric: took 9.191874ms to run NodePressure ...
	I1216 21:00:07.044668   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.388388   60421 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 21:00:07.394851   60421 kubeadm.go:739] kubelet initialised
	I1216 21:00:07.394881   60421 kubeadm.go:740] duration metric: took 6.459945ms waiting for restarted kubelet to initialise ...
	I1216 21:00:07.394891   60421 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:00:07.401877   60421 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.410697   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.410732   60421 pod_ready.go:82] duration metric: took 8.80876ms for pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.410744   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.410755   60421 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.418118   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "etcd-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.418149   60421 pod_ready.go:82] duration metric: took 7.383445ms for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.418163   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "etcd-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.418172   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.427341   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "kube-apiserver-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.427414   60421 pod_ready.go:82] duration metric: took 9.234588ms for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.427424   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "kube-apiserver-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.427432   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.435329   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.435378   60421 pod_ready.go:82] duration metric: took 7.931923ms for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.435392   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.435408   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5mw2b" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:04.457220   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.457399   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.457507   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.457596   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.457687   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.613834   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.613870   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.613923   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.613931   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.613960   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.613972   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.619915   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1216 21:00:04.791265   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1216 21:00:04.791297   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.791315   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1216 21:00:04.791352   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1216 21:00:04.791366   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1216 21:00:04.791384   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1216 21:00:04.836463   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1216 21:00:04.836536   60933 cache_images.go:92] duration metric: took 1.168508622s to LoadCachedImages
	W1216 21:00:04.836633   60933 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1216 21:00:04.836649   60933 kubeadm.go:934] updating node { 192.168.72.240 8443 v1.20.0 crio true true} ...
	I1216 21:00:04.836781   60933 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-847766 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 21:00:04.836877   60933 ssh_runner.go:195] Run: crio config
	I1216 21:00:04.898330   60933 cni.go:84] Creating CNI manager for ""
	I1216 21:00:04.898357   60933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:04.898371   60933 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 21:00:04.898396   60933 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.240 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-847766 NodeName:old-k8s-version-847766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1216 21:00:04.898560   60933 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-847766"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
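The kubeadm manifest printed above is staged as /var/tmp/minikube/kubeadm.yaml.new and, after the diff check further down in this log, copied over /var/tmp/minikube/kubeadm.yaml before the individual init phases are replayed. A minimal sketch of that flow, using only the paths and commands that appear later in this same log:

	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml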
	
	I1216 21:00:04.898643   60933 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1216 21:00:04.910946   60933 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 21:00:04.911045   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 21:00:04.923199   60933 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1216 21:00:04.942705   60933 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 21:00:04.976598   60933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1216 21:00:05.001967   60933 ssh_runner.go:195] Run: grep 192.168.72.240	control-plane.minikube.internal$ /etc/hosts
	I1216 21:00:05.006819   60933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
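The pair of commands above first checks whether /etc/hosts already pins control-plane.minikube.internal to the node IP and, if it does not, rewrites the file so the entry appears exactly once. The resulting line (a sketch, taken directly from the echo in the command itself) is:

	192.168.72.240	control-plane.minikube.internal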
	I1216 21:00:05.020604   60933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:05.143039   60933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:00:05.162507   60933 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766 for IP: 192.168.72.240
	I1216 21:00:05.162535   60933 certs.go:194] generating shared ca certs ...
	I1216 21:00:05.162554   60933 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:05.162749   60933 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 21:00:05.162792   60933 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 21:00:05.162803   60933 certs.go:256] generating profile certs ...
	I1216 21:00:05.162907   60933 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.key
	I1216 21:00:05.162976   60933 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key.6c8704df
	I1216 21:00:05.163011   60933 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key
	I1216 21:00:05.163148   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 21:00:05.163176   60933 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 21:00:05.163186   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 21:00:05.163210   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 21:00:05.163278   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 21:00:05.163315   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 21:00:05.163379   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:05.164216   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 21:00:05.222491   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 21:00:05.253517   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 21:00:05.294338   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 21:00:05.342850   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 21:00:05.388068   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 21:00:05.422591   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 21:00:05.471916   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 21:00:05.505836   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 21:00:05.539404   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 21:00:05.570819   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 21:00:05.602079   60933 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 21:00:05.630577   60933 ssh_runner.go:195] Run: openssl version
	I1216 21:00:05.640017   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 21:00:05.653759   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.659573   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.659645   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.666667   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 21:00:05.680061   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 21:00:05.692776   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.698644   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.698728   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.705913   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 21:00:05.730062   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 21:00:05.750034   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.757158   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.757252   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.765679   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
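The openssl x509 -hash calls above compute the OpenSSL subject hash of each CA file, and the ln -fs commands create the matching <hash>.0 symlinks that OpenSSL-style trust stores use to look certificates up in /etc/ssl/certs. A minimal sketch of the same convention, assuming the minikubeCA.pem path shown earlier in this log:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"

On this reading, b5213941.0 above is simply the subject hash of the minikube CA certificate.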
	I1216 21:00:05.782537   60933 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 21:00:05.788291   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 21:00:05.797707   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 21:00:05.807016   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 21:00:05.818160   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 21:00:05.827428   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 21:00:05.836499   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 21:00:05.846104   60933 kubeadm.go:392] StartCluster: {Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:00:05.846244   60933 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 21:00:05.846331   60933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:05.901274   60933 cri.go:89] found id: ""
	I1216 21:00:05.901376   60933 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 21:00:05.917353   60933 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 21:00:05.917381   60933 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 21:00:05.917439   60933 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 21:00:05.928587   60933 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 21:00:05.932546   60933 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-847766" does not appear in /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:00:05.933844   60933 kubeconfig.go:62] /home/jenkins/minikube-integration/20091-7083/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-847766" cluster setting kubeconfig missing "old-k8s-version-847766" context setting]
	I1216 21:00:05.935400   60933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:05.938054   60933 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 21:00:05.950384   60933 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.240
	I1216 21:00:05.950433   60933 kubeadm.go:1160] stopping kube-system containers ...
	I1216 21:00:05.950450   60933 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 21:00:05.950515   60933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:05.999495   60933 cri.go:89] found id: ""
	I1216 21:00:05.999588   60933 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 21:00:06.024765   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:00:06.037807   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:00:06.037836   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:00:06.037894   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:00:06.048926   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:00:06.048997   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:00:06.060167   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:00:06.070841   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:00:06.070910   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:00:06.083517   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:00:06.099124   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:00:06.099214   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:00:06.110004   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:00:06.125600   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:00:06.125668   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:00:06.137212   60933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:00:06.148873   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:06.316611   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.220187   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.549730   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.698864   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.815495   60933 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:00:07.815657   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:08.316003   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:08.816482   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:09.315805   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:05.050699   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:05.051378   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:05.051413   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:05.051296   62108 retry.go:31] will retry after 2.184829789s: waiting for machine to come up
	I1216 21:00:07.237618   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:07.238137   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:07.238166   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:07.238049   62108 retry.go:31] will retry after 2.531717629s: waiting for machine to come up
	I1216 21:00:05.713060   60829 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:05.798544   60829 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.798569   60829 pod_ready.go:82] duration metric: took 2.508055323s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.798582   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mplxr" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.805322   60829 pod_ready.go:93] pod "kube-proxy-mplxr" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.805361   60829 pod_ready.go:82] duration metric: took 6.77ms for pod "kube-proxy-mplxr" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.805399   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.812700   60829 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.812727   60829 pod_ready.go:82] duration metric: took 7.281992ms for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.812741   60829 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.822004   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:10.321160   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:09.443582   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:11.443796   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:09.815863   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:10.316664   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:10.815852   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:11.316175   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:11.816446   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:12.316040   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:12.816172   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:13.316460   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:13.815700   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:14.316469   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:09.772318   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:09.772837   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:09.772869   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:09.772797   62108 retry.go:31] will retry after 2.557982234s: waiting for machine to come up
	I1216 21:00:12.331877   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:12.332340   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:12.332368   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:12.332298   62108 retry.go:31] will retry after 4.202991569s: waiting for machine to come up
	I1216 21:00:12.322897   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:14.323015   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:13.942154   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:16.442411   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:14.816539   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:15.315737   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:15.816465   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.316470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.816451   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:17.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:17.816470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:18.316165   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:18.816448   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:19.315972   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.539792   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.540299   60215 main.go:141] libmachine: (embed-certs-606219) Found IP for machine: 192.168.61.151
	I1216 21:00:16.540324   60215 main.go:141] libmachine: (embed-certs-606219) Reserving static IP address...
	I1216 21:00:16.540341   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has current primary IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.540771   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "embed-certs-606219", mac: "52:54:00:63:37:8f", ip: "192.168.61.151"} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.540810   60215 main.go:141] libmachine: (embed-certs-606219) DBG | skip adding static IP to network mk-embed-certs-606219 - found existing host DHCP lease matching {name: "embed-certs-606219", mac: "52:54:00:63:37:8f", ip: "192.168.61.151"}
	I1216 21:00:16.540827   60215 main.go:141] libmachine: (embed-certs-606219) Reserved static IP address: 192.168.61.151
	I1216 21:00:16.540839   60215 main.go:141] libmachine: (embed-certs-606219) Waiting for SSH to be available...
	I1216 21:00:16.540847   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Getting to WaitForSSH function...
	I1216 21:00:16.542958   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.543461   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.543503   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.543629   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Using SSH client type: external
	I1216 21:00:16.543663   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa (-rw-------)
	I1216 21:00:16.543696   60215 main.go:141] libmachine: (embed-certs-606219) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 21:00:16.543713   60215 main.go:141] libmachine: (embed-certs-606219) DBG | About to run SSH command:
	I1216 21:00:16.543732   60215 main.go:141] libmachine: (embed-certs-606219) DBG | exit 0
	I1216 21:00:16.671576   60215 main.go:141] libmachine: (embed-certs-606219) DBG | SSH cmd err, output: <nil>: 
	I1216 21:00:16.671965   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetConfigRaw
	I1216 21:00:16.672599   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:16.675179   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.675520   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.675549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.675726   60215 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/config.json ...
	I1216 21:00:16.675938   60215 machine.go:93] provisionDockerMachine start ...
	I1216 21:00:16.675955   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:16.676186   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.678481   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.678824   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.678846   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.679020   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.679203   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.679388   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.679530   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.679689   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.679883   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.679896   60215 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 21:00:16.791925   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 21:00:16.791959   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:16.792224   60215 buildroot.go:166] provisioning hostname "embed-certs-606219"
	I1216 21:00:16.792261   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:16.792492   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.794967   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.795359   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.795388   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.795496   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.795674   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.795845   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.795995   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.796238   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.796466   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.796486   60215 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-606219 && echo "embed-certs-606219" | sudo tee /etc/hostname
	I1216 21:00:16.923887   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-606219
	
	I1216 21:00:16.923922   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.926689   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.927228   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.927283   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.927500   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.927724   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.927943   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.928139   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.928396   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.928574   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.928590   60215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-606219' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-606219/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-606219' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 21:00:17.045462   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 21:00:17.045508   60215 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 21:00:17.045540   60215 buildroot.go:174] setting up certificates
	I1216 21:00:17.045560   60215 provision.go:84] configureAuth start
	I1216 21:00:17.045578   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:17.045889   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:17.048733   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.049038   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.049062   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.049216   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.051371   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.051713   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.051748   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.051861   60215 provision.go:143] copyHostCerts
	I1216 21:00:17.051940   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 21:00:17.051954   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 21:00:17.052033   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 21:00:17.052187   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 21:00:17.052203   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 21:00:17.052230   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 21:00:17.052306   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 21:00:17.052317   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 21:00:17.052342   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 21:00:17.052413   60215 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.embed-certs-606219 san=[127.0.0.1 192.168.61.151 embed-certs-606219 localhost minikube]
	I1216 21:00:17.345020   60215 provision.go:177] copyRemoteCerts
	I1216 21:00:17.345079   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 21:00:17.345116   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.348019   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.348323   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.348350   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.348554   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.348783   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.348931   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.349093   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:17.434520   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 21:00:17.462097   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1216 21:00:17.488071   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 21:00:17.516428   60215 provision.go:87] duration metric: took 470.851303ms to configureAuth
	I1216 21:00:17.516461   60215 buildroot.go:189] setting minikube options for container-runtime
	I1216 21:00:17.516673   60215 config.go:182] Loaded profile config "embed-certs-606219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:00:17.516763   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.519637   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.519981   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.520019   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.520229   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.520451   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.520654   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.520813   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.520977   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:17.521148   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:17.521166   60215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 21:00:17.787052   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 21:00:17.787084   60215 machine.go:96] duration metric: took 1.111132885s to provisionDockerMachine
	I1216 21:00:17.787111   60215 start.go:293] postStartSetup for "embed-certs-606219" (driver="kvm2")
	I1216 21:00:17.787126   60215 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 21:00:17.787145   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:17.787551   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 21:00:17.787588   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.790332   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.790710   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.790743   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.790891   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.791130   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.791336   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.791492   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:17.881548   60215 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 21:00:17.886692   60215 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 21:00:17.886720   60215 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 21:00:17.886788   60215 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 21:00:17.886886   60215 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 21:00:17.886983   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 21:00:17.897832   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:17.926273   60215 start.go:296] duration metric: took 139.147156ms for postStartSetup
	I1216 21:00:17.926316   60215 fix.go:56] duration metric: took 21.229856025s for fixHost
	I1216 21:00:17.926338   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.929204   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.929600   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.929623   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.929809   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.930036   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.930220   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.930411   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.930554   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:17.930723   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:17.930734   60215 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 21:00:18.040530   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382817.988837134
	
	I1216 21:00:18.040557   60215 fix.go:216] guest clock: 1734382817.988837134
	I1216 21:00:18.040590   60215 fix.go:229] Guest: 2024-12-16 21:00:17.988837134 +0000 UTC Remote: 2024-12-16 21:00:17.926320778 +0000 UTC m=+358.266755361 (delta=62.516356ms)
	I1216 21:00:18.040639   60215 fix.go:200] guest clock delta is within tolerance: 62.516356ms
	I1216 21:00:18.040650   60215 start.go:83] releasing machines lock for "embed-certs-606219", held for 21.34422537s
	I1216 21:00:18.040682   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.040997   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:18.044100   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.044549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.044584   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.044727   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045237   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045454   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045544   60215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 21:00:18.045602   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:18.045673   60215 ssh_runner.go:195] Run: cat /version.json
	I1216 21:00:18.045702   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:18.048852   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049066   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049259   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.049285   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049423   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:18.049578   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.049610   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049611   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:18.049688   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:18.049885   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:18.049908   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:18.050090   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:18.050082   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:18.050313   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:18.128381   60215 ssh_runner.go:195] Run: systemctl --version
	I1216 21:00:18.165162   60215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 21:00:18.313679   60215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 21:00:18.321330   60215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 21:00:18.321407   60215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 21:00:18.340577   60215 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 21:00:18.340601   60215 start.go:495] detecting cgroup driver to use...
	I1216 21:00:18.340672   60215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 21:00:18.357273   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 21:00:18.373169   60215 docker.go:217] disabling cri-docker service (if available) ...
	I1216 21:00:18.373231   60215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 21:00:18.387904   60215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 21:00:18.402499   60215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 21:00:18.528830   60215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 21:00:18.677746   60215 docker.go:233] disabling docker service ...
	I1216 21:00:18.677839   60215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 21:00:18.693059   60215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 21:00:18.707368   60215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 21:00:18.870936   60215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 21:00:19.011321   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 21:00:19.025645   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 21:00:19.045618   60215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 21:00:19.045695   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.056739   60215 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 21:00:19.056813   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.067975   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.078954   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.090165   60215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 21:00:19.101906   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.112949   60215 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.131186   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.142238   60215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 21:00:19.152768   60215 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 21:00:19.152830   60215 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 21:00:19.169166   60215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
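The three steps above (probe the bridge-nf-call-iptables sysctl, load br_netfilter when it is missing, then enable IPv4 forwarding) can be sketched in Go roughly as follows. This is an illustration of the sequence in the log, not minikube's actual code, and it needs root to run.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(sysctlPath); err != nil {
		// As in the log: the sysctl only exists once br_netfilter is loaded.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter: %v: %s\n", err, out)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println(err)
	}
}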
	I1216 21:00:19.188991   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:19.319083   60215 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 21:00:19.427266   60215 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 21:00:19.427377   60215 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 21:00:19.432716   60215 start.go:563] Will wait 60s for crictl version
	I1216 21:00:19.432793   60215 ssh_runner.go:195] Run: which crictl
	I1216 21:00:19.437514   60215 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 21:00:19.484613   60215 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 21:00:19.484726   60215 ssh_runner.go:195] Run: crio --version
	I1216 21:00:19.519451   60215 ssh_runner.go:195] Run: crio --version
	I1216 21:00:19.555298   60215 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 21:00:19.556696   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:19.559802   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:19.560178   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:19.560201   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:19.560467   60215 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1216 21:00:19.565180   60215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:00:19.579863   60215 kubeadm.go:883] updating cluster {Name:embed-certs-606219 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 21:00:19.579991   60215 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 21:00:19.580037   60215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:19.618480   60215 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 21:00:19.618556   60215 ssh_runner.go:195] Run: which lz4
	I1216 21:00:19.622839   60215 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 21:00:19.627438   60215 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 21:00:19.627482   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
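The pattern here is "stat the target, copy the preload tarball only if it is absent". A local-filesystem sketch of that check-then-copy step (minikube performs it over SSH via ssh_runner, which this sketch does not attempt; paths are copied from the log):

package main

import (
	"fmt"
	"io"
	"os"
)

// copyIfMissing copies src to dst only when dst does not already exist,
// mirroring the stat-then-scp sequence in the log above.
func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to do
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	src := "/home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4"
	if err := copyIfMissing(src, "/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}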
	I1216 21:00:16.819610   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:19.326427   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:17.942107   60421 pod_ready.go:93] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:17.942148   60421 pod_ready.go:82] duration metric: took 10.506728599s for pod "kube-proxy-5mw2b" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.942161   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.948518   60421 pod_ready.go:93] pod "kube-scheduler-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:17.948540   60421 pod_ready.go:82] duration metric: took 6.372903ms for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.948549   60421 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:19.956992   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:21.957271   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:19.815807   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:20.316465   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:20.816461   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.316731   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.816637   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:22.315727   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:22.816447   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:23.316510   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:23.816408   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:24.316454   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.237863   60215 crio.go:462] duration metric: took 1.615059209s to copy over tarball
	I1216 21:00:21.237956   60215 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 21:00:23.572502   60215 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.33450798s)
	I1216 21:00:23.572535   60215 crio.go:469] duration metric: took 2.334633133s to extract the tarball
	I1216 21:00:23.572549   60215 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 21:00:23.613530   60215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:23.667777   60215 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 21:00:23.667807   60215 cache_images.go:84] Images are preloaded, skipping loading
	I1216 21:00:23.667815   60215 kubeadm.go:934] updating node { 192.168.61.151 8443 v1.32.0 crio true true} ...
	I1216 21:00:23.667929   60215 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-606219 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 21:00:23.668009   60215 ssh_runner.go:195] Run: crio config
	I1216 21:00:23.716162   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:00:23.716184   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:23.716192   60215 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 21:00:23.716211   60215 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.151 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-606219 NodeName:embed-certs-606219 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 21:00:23.716337   60215 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-606219"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.151"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.151"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 21:00:23.716393   60215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 21:00:23.727236   60215 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 21:00:23.727337   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 21:00:23.737632   60215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1216 21:00:23.757380   60215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 21:00:23.774863   60215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1216 21:00:23.795070   60215 ssh_runner.go:195] Run: grep 192.168.61.151	control-plane.minikube.internal$ /etc/hosts
	I1216 21:00:23.799453   60215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:00:23.814278   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:23.962200   60215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:00:23.981947   60215 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219 for IP: 192.168.61.151
	I1216 21:00:23.981976   60215 certs.go:194] generating shared ca certs ...
	I1216 21:00:23.981999   60215 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:23.982156   60215 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 21:00:23.982197   60215 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 21:00:23.982204   60215 certs.go:256] generating profile certs ...
	I1216 21:00:23.982280   60215 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/client.key
	I1216 21:00:23.982336   60215 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.key.b346be49
	I1216 21:00:23.982376   60215 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.key
	I1216 21:00:23.982483   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 21:00:23.982513   60215 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 21:00:23.982523   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 21:00:23.982555   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 21:00:23.982582   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 21:00:23.982602   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 21:00:23.982655   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:23.983524   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 21:00:24.015369   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 21:00:24.043889   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 21:00:24.087807   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 21:00:24.137438   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1216 21:00:24.174859   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 21:00:24.200220   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 21:00:24.225811   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 21:00:24.251567   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 21:00:24.276737   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 21:00:24.302541   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 21:00:24.329876   60215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 21:00:24.350133   60215 ssh_runner.go:195] Run: openssl version
	I1216 21:00:24.356984   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 21:00:24.371219   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.376759   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.376816   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.383725   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 21:00:24.397759   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 21:00:24.409836   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.414765   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.414836   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.421662   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 21:00:24.433843   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 21:00:24.447839   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.453107   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.453185   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.459472   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 21:00:24.471714   60215 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 21:00:24.476881   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 21:00:24.486263   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 21:00:24.493146   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 21:00:24.500093   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 21:00:24.506599   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 21:00:24.512946   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
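Each "openssl x509 -checkend 86400" call above asks whether the certificate will still be valid 24 hours from now. A small Go sketch of the same check (path and duration copied from the log; this is illustrative, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within the given duration, which is what "-checkend" verifies.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}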
	I1216 21:00:24.519699   60215 kubeadm.go:392] StartCluster: {Name:embed-certs-606219 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:00:24.519780   60215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 21:00:24.519861   60215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:24.570867   60215 cri.go:89] found id: ""
	I1216 21:00:24.570952   60215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 21:00:24.583857   60215 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 21:00:24.583887   60215 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 21:00:24.583943   60215 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 21:00:24.595709   60215 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 21:00:24.596734   60215 kubeconfig.go:125] found "embed-certs-606219" server: "https://192.168.61.151:8443"
	I1216 21:00:24.598569   60215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 21:00:24.609876   60215 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.151
	I1216 21:00:24.609905   60215 kubeadm.go:1160] stopping kube-system containers ...
	I1216 21:00:24.609917   60215 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 21:00:24.609964   60215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:24.654487   60215 cri.go:89] found id: ""
	I1216 21:00:24.654567   60215 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 21:00:24.676658   60215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:00:24.689546   60215 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:00:24.689571   60215 kubeadm.go:157] found existing configuration files:
	
	I1216 21:00:24.689615   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:00:21.819876   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:23.820061   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:23.957368   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:26.556301   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:24.816467   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:25.315789   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:25.816410   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.316537   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.816144   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.316659   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.816126   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:28.316568   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:28.816151   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:29.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:24.700928   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:00:24.701012   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:00:24.713438   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:00:24.725184   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:00:24.725257   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:00:24.737483   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:00:24.749488   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:00:24.749546   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:00:24.762322   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:00:24.774309   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:00:24.774391   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:00:24.787008   60215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:00:24.798394   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:25.009799   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:25.917432   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:26.175602   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:26.279646   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:26.362472   60215 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:00:26.362564   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.862646   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.362663   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.421335   60215 api_server.go:72] duration metric: took 1.058863872s to wait for apiserver process to appear ...
	I1216 21:00:27.421361   60215 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:00:27.421380   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:27.421869   60215 api_server.go:269] stopped: https://192.168.61.151:8443/healthz: Get "https://192.168.61.151:8443/healthz": dial tcp 192.168.61.151:8443: connect: connection refused
	I1216 21:00:27.921493   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:26.471175   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:28.819200   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:30.365380   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.365410   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.365425   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.416044   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.416078   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.422219   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.432135   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.432161   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.921790   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.929160   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:30.929192   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:31.421708   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:31.432805   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:31.432839   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:31.922000   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:31.933658   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 200:
	ok
	I1216 21:00:31.945496   60215 api_server.go:141] control plane version: v1.32.0
	I1216 21:00:31.945534   60215 api_server.go:131] duration metric: took 4.524165612s to wait for apiserver health ...
	I1216 21:00:31.945546   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:00:31.945555   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:31.947456   60215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:00:28.954572   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:30.955397   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:29.816510   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:30.315756   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:30.815774   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.316516   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.816503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:32.316499   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:32.816455   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:33.316478   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:33.816363   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:34.316057   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.948727   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:00:31.977877   60215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:00:32.014745   60215 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:00:32.027268   60215 system_pods.go:59] 8 kube-system pods found
	I1216 21:00:32.027303   60215 system_pods.go:61] "coredns-668d6bf9bc-rp29f" [0135dcef-2324-49ec-b459-f34b73efd82b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:00:32.027311   60215 system_pods.go:61] "etcd-embed-certs-606219" [05f01ef3-5d92-4d16-9643-0f56df3869f6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 21:00:32.027320   60215 system_pods.go:61] "kube-apiserver-embed-certs-606219" [4294c469-e47a-4722-a620-92c33d23b41e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 21:00:32.027326   60215 system_pods.go:61] "kube-controller-manager-embed-certs-606219" [cc8452e6-ca00-44dd-8d77-897df20d37f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 21:00:32.027354   60215 system_pods.go:61] "kube-proxy-8t495" [492be5cc-7d3a-4983-9bc7-14091bef7b43] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 21:00:32.027362   60215 system_pods.go:61] "kube-scheduler-embed-certs-606219" [63c42d73-a17a-4b37-a585-f7db5923c493] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 21:00:32.027376   60215 system_pods.go:61] "metrics-server-f79f97bbb-d6gmd" [50916d48-ee33-4e96-9507-c486d8ac7f7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:00:32.027387   60215 system_pods.go:61] "storage-provisioner" [1164651f-c3b5-445f-882a-60eb2f2fb3f8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 21:00:32.027399   60215 system_pods.go:74] duration metric: took 12.633182ms to wait for pod list to return data ...
	I1216 21:00:32.027409   60215 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:00:32.041648   60215 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:00:32.041677   60215 node_conditions.go:123] node cpu capacity is 2
	I1216 21:00:32.041686   60215 node_conditions.go:105] duration metric: took 14.27317ms to run NodePressure ...
	I1216 21:00:32.041704   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:32.492472   60215 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 21:00:32.504237   60215 kubeadm.go:739] kubelet initialised
	I1216 21:00:32.504271   60215 kubeadm.go:740] duration metric: took 11.772175ms waiting for restarted kubelet to initialise ...
	I1216 21:00:32.504282   60215 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:00:32.525531   60215 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:34.531954   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:31.321998   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:33.325288   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:32.959143   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:35.454928   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:37.455474   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:34.815839   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:35.316503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:35.816590   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.316231   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.816011   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:37.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:37.816494   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:38.316486   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:38.816475   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:39.315762   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.534516   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.032255   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:35.819575   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:38.322139   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:40.322804   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.456089   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:41.955128   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.816009   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:40.316444   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:40.816493   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.315869   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.816495   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:42.316034   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:42.816422   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:43.316432   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:43.815875   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:44.316036   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.032545   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:43.534471   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:42.819610   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:44.820561   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:43.955190   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:46.455540   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:44.816293   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.316458   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.815992   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:46.316054   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:46.816449   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:47.316113   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:47.816514   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:48.316353   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:48.816144   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:49.316435   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.031682   60215 pod_ready.go:93] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.031705   60215 pod_ready.go:82] duration metric: took 12.506146086s for pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.031715   60215 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.038109   60215 pod_ready.go:93] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.038138   60215 pod_ready.go:82] duration metric: took 6.416609ms for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.038149   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.043764   60215 pod_ready.go:93] pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.043784   60215 pod_ready.go:82] duration metric: took 5.621982ms for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.043793   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.053376   60215 pod_ready.go:93] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.053399   60215 pod_ready.go:82] duration metric: took 9.600142ms for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.053409   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8t495" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.058956   60215 pod_ready.go:93] pod "kube-proxy-8t495" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.058976   60215 pod_ready.go:82] duration metric: took 5.561188ms for pod "kube-proxy-8t495" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.058984   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.429908   60215 pod_ready.go:93] pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.429932   60215 pod_ready.go:82] duration metric: took 370.942192ms for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.429942   60215 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:47.438759   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:47.323605   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:49.819763   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:48.456270   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:50.955190   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:49.815935   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:50.316437   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:50.816335   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:51.315747   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:51.816504   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:52.315695   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:52.816115   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:53.316498   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:53.816529   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:54.315689   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:49.935961   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:51.937245   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:53.937302   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:51.820266   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:53.820748   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:52.956645   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:55.456064   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:54.816019   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:55.316484   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:55.816517   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.315858   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.816306   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:57.316447   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:57.815879   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:58.316493   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:58.816395   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:59.316225   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.437390   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:58.938617   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:56.323619   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:58.820330   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:57.956401   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:00.456844   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:02.457677   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:59.816440   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:00.315769   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:00.816285   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.316020   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.818175   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:02.315780   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:02.816411   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:03.315758   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:03.815810   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:04.316731   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.436856   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:03.436945   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:00.820484   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:03.323328   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:04.955714   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.455361   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:04.816470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:05.316528   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:05.815792   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:06.316491   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:06.815977   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:07.316002   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:07.816043   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:07.816114   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:07.861866   60933 cri.go:89] found id: ""
	I1216 21:01:07.861896   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.861906   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:07.861913   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:07.861978   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:07.905674   60933 cri.go:89] found id: ""
	I1216 21:01:07.905700   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.905707   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:07.905713   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:07.905798   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:07.949936   60933 cri.go:89] found id: ""
	I1216 21:01:07.949966   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.949977   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:07.949984   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:07.950048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:07.987196   60933 cri.go:89] found id: ""
	I1216 21:01:07.987223   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.987232   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:07.987237   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:07.987341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:08.033126   60933 cri.go:89] found id: ""
	I1216 21:01:08.033156   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.033168   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:08.033176   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:08.033252   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:08.072223   60933 cri.go:89] found id: ""
	I1216 21:01:08.072257   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.072270   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:08.072278   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:08.072345   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:08.117257   60933 cri.go:89] found id: ""
	I1216 21:01:08.117288   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.117299   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:08.117319   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:08.117389   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:08.158059   60933 cri.go:89] found id: ""
	I1216 21:01:08.158096   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.158106   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:08.158119   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:08.158133   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:08.232930   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:08.232966   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:08.277173   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:08.277204   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:08.331763   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:08.331802   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:08.346150   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:08.346178   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:08.488668   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:05.437627   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.938294   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:05.820491   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.821058   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:10.322630   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:09.456101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:11.461923   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:10.989383   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:11.003162   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:11.003266   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:11.040432   60933 cri.go:89] found id: ""
	I1216 21:01:11.040464   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.040475   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:11.040483   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:11.040547   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:11.083083   60933 cri.go:89] found id: ""
	I1216 21:01:11.083110   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.083117   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:11.083122   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:11.083182   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:11.122842   60933 cri.go:89] found id: ""
	I1216 21:01:11.122880   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.122893   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:11.122900   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:11.122969   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:11.168227   60933 cri.go:89] found id: ""
	I1216 21:01:11.168268   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.168279   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:11.168286   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:11.168359   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:11.218660   60933 cri.go:89] found id: ""
	I1216 21:01:11.218689   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.218701   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:11.218708   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:11.218774   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:11.281179   60933 cri.go:89] found id: ""
	I1216 21:01:11.281214   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.281227   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:11.281236   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:11.281315   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:11.326419   60933 cri.go:89] found id: ""
	I1216 21:01:11.326453   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.326464   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:11.326472   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:11.326535   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:11.368825   60933 cri.go:89] found id: ""
	I1216 21:01:11.368863   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.368875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:11.368887   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:11.368905   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:11.454848   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:11.454869   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:11.454888   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:11.541685   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:11.541724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:11.581804   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:11.581830   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:11.635800   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:11.635838   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:14.152441   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:14.167637   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:14.167720   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:14.206685   60933 cri.go:89] found id: ""
	I1216 21:01:14.206716   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.206728   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:14.206735   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:14.206796   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:14.248126   60933 cri.go:89] found id: ""
	I1216 21:01:14.248151   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.248159   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:14.248165   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:14.248215   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:14.285030   60933 cri.go:89] found id: ""
	I1216 21:01:14.285067   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.285079   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:14.285086   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:14.285151   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:14.325706   60933 cri.go:89] found id: ""
	I1216 21:01:14.325736   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.325747   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:14.325755   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:14.325820   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:14.369447   60933 cri.go:89] found id: ""
	I1216 21:01:14.369475   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.369486   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:14.369494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:14.369557   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:10.437872   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:12.937013   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:12.820480   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:15.319910   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:13.959919   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:16.458101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:14.407792   60933 cri.go:89] found id: ""
	I1216 21:01:14.407818   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.407826   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:14.407832   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:14.407890   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:14.448380   60933 cri.go:89] found id: ""
	I1216 21:01:14.448411   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.448419   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:14.448424   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:14.448473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:14.487116   60933 cri.go:89] found id: ""
	I1216 21:01:14.487144   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.487154   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:14.487164   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:14.487177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:14.547342   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:14.547390   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:14.563385   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:14.563424   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:14.637363   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:14.637394   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:14.637410   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:14.715586   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:14.715626   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:17.258974   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:17.273896   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:17.273970   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:17.317359   60933 cri.go:89] found id: ""
	I1216 21:01:17.317394   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.317405   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:17.317412   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:17.317476   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:17.361422   60933 cri.go:89] found id: ""
	I1216 21:01:17.361451   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.361462   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:17.361469   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:17.361568   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:17.401466   60933 cri.go:89] found id: ""
	I1216 21:01:17.401522   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.401534   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:17.401544   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:17.401614   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:17.439560   60933 cri.go:89] found id: ""
	I1216 21:01:17.439588   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.439597   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:17.439603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:17.439655   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:17.480310   60933 cri.go:89] found id: ""
	I1216 21:01:17.480333   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.480340   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:17.480345   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:17.480393   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:17.528562   60933 cri.go:89] found id: ""
	I1216 21:01:17.528589   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.528600   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:17.528607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:17.528671   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:17.569863   60933 cri.go:89] found id: ""
	I1216 21:01:17.569900   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.569908   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:17.569914   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:17.569975   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:17.610840   60933 cri.go:89] found id: ""
	I1216 21:01:17.610867   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.610875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:17.610884   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:17.610895   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:17.661002   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:17.661041   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:17.675290   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:17.675318   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:17.743550   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:17.743572   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:17.743584   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:17.824479   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:17.824524   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:15.437260   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:17.937487   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:17.324337   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:19.819325   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:18.956605   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:20.957030   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:20.373687   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:20.389149   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:20.389244   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:20.429594   60933 cri.go:89] found id: ""
	I1216 21:01:20.429626   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.429634   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:20.429639   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:20.429693   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:20.473157   60933 cri.go:89] found id: ""
	I1216 21:01:20.473185   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.473193   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:20.473198   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:20.473264   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:20.512549   60933 cri.go:89] found id: ""
	I1216 21:01:20.512586   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.512597   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:20.512604   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:20.512676   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:20.549275   60933 cri.go:89] found id: ""
	I1216 21:01:20.549310   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.549323   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:20.549344   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:20.549408   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:20.587405   60933 cri.go:89] found id: ""
	I1216 21:01:20.587435   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.587443   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:20.587449   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:20.587515   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:20.625364   60933 cri.go:89] found id: ""
	I1216 21:01:20.625393   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.625400   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:20.625406   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:20.625456   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:20.664018   60933 cri.go:89] found id: ""
	I1216 21:01:20.664050   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.664061   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:20.664068   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:20.664117   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:20.703860   60933 cri.go:89] found id: ""
	I1216 21:01:20.703890   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.703898   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:20.703906   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:20.703918   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:20.754433   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:20.754470   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:20.770136   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:20.770172   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:20.854025   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:20.854049   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:20.854061   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:20.939628   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:20.939661   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:23.489645   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:23.503603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:23.503667   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:23.543044   60933 cri.go:89] found id: ""
	I1216 21:01:23.543070   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.543077   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:23.543083   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:23.543131   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:23.580333   60933 cri.go:89] found id: ""
	I1216 21:01:23.580362   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.580371   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:23.580377   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:23.580428   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:23.616732   60933 cri.go:89] found id: ""
	I1216 21:01:23.616766   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.616778   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:23.616785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:23.616834   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:23.655771   60933 cri.go:89] found id: ""
	I1216 21:01:23.655793   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.655801   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:23.655807   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:23.655861   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:23.694400   60933 cri.go:89] found id: ""
	I1216 21:01:23.694430   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.694437   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:23.694443   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:23.694500   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:23.732592   60933 cri.go:89] found id: ""
	I1216 21:01:23.732622   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.732630   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:23.732636   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:23.732688   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:23.769752   60933 cri.go:89] found id: ""
	I1216 21:01:23.769787   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.769801   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:23.769810   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:23.769892   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:23.806891   60933 cri.go:89] found id: ""
	I1216 21:01:23.806925   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.806936   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:23.806947   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:23.806963   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:23.822887   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:23.822912   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:23.898795   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:23.898817   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:23.898830   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:23.978036   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:23.978073   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:24.032500   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:24.032528   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:20.437888   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:22.936895   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:21.819859   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:23.820383   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:23.456331   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:25.960513   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:26.585937   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:26.599470   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:26.599543   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:26.635421   60933 cri.go:89] found id: ""
	I1216 21:01:26.635446   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.635455   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:26.635461   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:26.635527   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:26.675347   60933 cri.go:89] found id: ""
	I1216 21:01:26.675379   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.675390   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:26.675397   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:26.675464   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:26.715444   60933 cri.go:89] found id: ""
	I1216 21:01:26.715469   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.715480   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:26.715541   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:26.715619   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:26.753841   60933 cri.go:89] found id: ""
	I1216 21:01:26.753874   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.753893   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:26.753901   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:26.753963   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:26.791427   60933 cri.go:89] found id: ""
	I1216 21:01:26.791453   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.791464   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:26.791473   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:26.791539   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:26.832772   60933 cri.go:89] found id: ""
	I1216 21:01:26.832804   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.832816   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:26.832823   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:26.832887   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:26.869963   60933 cri.go:89] found id: ""
	I1216 21:01:26.869990   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.869997   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:26.870003   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:26.870068   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:26.906792   60933 cri.go:89] found id: ""
	I1216 21:01:26.906821   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.906862   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:26.906875   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:26.906894   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:26.994820   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:26.994863   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:27.034642   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:27.034686   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:27.089128   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:27.089168   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:27.104368   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:27.104401   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:27.179852   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:25.436696   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:27.937229   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:26.319568   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:28.820132   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:28.454880   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:30.455734   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:29.681052   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:29.695376   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:29.695464   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:29.735562   60933 cri.go:89] found id: ""
	I1216 21:01:29.735588   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.735596   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:29.735602   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:29.735650   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:29.772635   60933 cri.go:89] found id: ""
	I1216 21:01:29.772663   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.772672   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:29.772678   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:29.772737   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:29.810471   60933 cri.go:89] found id: ""
	I1216 21:01:29.810499   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.810509   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:29.810516   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:29.810575   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:29.845917   60933 cri.go:89] found id: ""
	I1216 21:01:29.845952   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.845966   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:29.845975   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:29.846048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:29.883866   60933 cri.go:89] found id: ""
	I1216 21:01:29.883892   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.883900   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:29.883906   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:29.883968   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:29.920696   60933 cri.go:89] found id: ""
	I1216 21:01:29.920729   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.920740   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:29.920748   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:29.920831   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:29.957977   60933 cri.go:89] found id: ""
	I1216 21:01:29.958056   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.958069   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:29.958079   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:29.958144   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:29.995436   60933 cri.go:89] found id: ""
	I1216 21:01:29.995464   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.995472   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:29.995481   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:29.995492   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:30.046819   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:30.046859   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:30.062754   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:30.062807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:30.138932   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:30.138959   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:30.138975   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:30.225720   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:30.225768   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:32.768185   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:32.782642   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:32.782729   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:32.821995   60933 cri.go:89] found id: ""
	I1216 21:01:32.822029   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.822040   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:32.822048   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:32.822112   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:32.858453   60933 cri.go:89] found id: ""
	I1216 21:01:32.858487   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.858497   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:32.858504   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:32.858570   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:32.896269   60933 cri.go:89] found id: ""
	I1216 21:01:32.896304   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.896316   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:32.896323   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:32.896384   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:32.936795   60933 cri.go:89] found id: ""
	I1216 21:01:32.936820   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.936832   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:32.936838   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:32.936904   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:32.974779   60933 cri.go:89] found id: ""
	I1216 21:01:32.974810   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.974821   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:32.974828   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:32.974892   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:33.012201   60933 cri.go:89] found id: ""
	I1216 21:01:33.012226   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.012234   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:33.012239   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:33.012287   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:33.049777   60933 cri.go:89] found id: ""
	I1216 21:01:33.049803   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.049811   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:33.049816   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:33.049873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:33.087820   60933 cri.go:89] found id: ""
	I1216 21:01:33.087851   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.087859   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:33.087870   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:33.087885   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:33.140816   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:33.140854   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:33.154817   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:33.154855   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:33.231445   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:33.231474   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:33.231496   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:33.311547   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:33.311586   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:29.938045   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:32.436934   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:34.444209   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:31.321180   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:33.324091   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:32.956028   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.454994   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:37.455094   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.855686   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:35.870404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:35.870485   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:35.908175   60933 cri.go:89] found id: ""
	I1216 21:01:35.908204   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.908215   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:35.908222   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:35.908284   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:35.955456   60933 cri.go:89] found id: ""
	I1216 21:01:35.955483   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.955494   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:35.955501   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:35.955562   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:35.995170   60933 cri.go:89] found id: ""
	I1216 21:01:35.995201   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.995211   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:35.995218   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:35.995305   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:36.033729   60933 cri.go:89] found id: ""
	I1216 21:01:36.033758   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.033769   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:36.033776   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:36.033840   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:36.072756   60933 cri.go:89] found id: ""
	I1216 21:01:36.072787   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.072799   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:36.072806   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:36.072873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:36.112149   60933 cri.go:89] found id: ""
	I1216 21:01:36.112187   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.112198   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:36.112205   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:36.112258   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:36.148742   60933 cri.go:89] found id: ""
	I1216 21:01:36.148770   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.148781   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:36.148789   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:36.148855   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:36.192827   60933 cri.go:89] found id: ""
	I1216 21:01:36.192864   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.192875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:36.192886   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:36.192901   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:36.243822   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:36.243867   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:36.258258   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:36.258292   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:36.342847   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:36.342876   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:36.342891   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:36.424741   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:36.424780   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:38.967334   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:38.982208   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:38.982283   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:39.023903   60933 cri.go:89] found id: ""
	I1216 21:01:39.023931   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.023939   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:39.023945   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:39.023997   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:39.070314   60933 cri.go:89] found id: ""
	I1216 21:01:39.070342   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.070351   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:39.070359   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:39.070423   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:39.115081   60933 cri.go:89] found id: ""
	I1216 21:01:39.115106   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.115113   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:39.115119   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:39.115178   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:39.151933   60933 cri.go:89] found id: ""
	I1216 21:01:39.151959   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.151967   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:39.151972   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:39.152022   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:39.192280   60933 cri.go:89] found id: ""
	I1216 21:01:39.192307   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.192315   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:39.192322   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:39.192370   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:39.228792   60933 cri.go:89] found id: ""
	I1216 21:01:39.228814   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.228822   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:39.228827   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:39.228887   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:39.266823   60933 cri.go:89] found id: ""
	I1216 21:01:39.266847   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.266854   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:39.266860   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:39.266908   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:39.301317   60933 cri.go:89] found id: ""
	I1216 21:01:39.301340   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.301348   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:39.301361   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:39.301372   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:39.386615   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:39.386663   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:36.936376   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:38.936968   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.820025   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:37.820396   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:40.319915   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:39.457790   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:41.955758   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:39.433079   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:39.433112   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:39.489422   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:39.489458   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:39.504223   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:39.504259   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:39.587898   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:42.088900   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:42.103768   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:42.103854   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:42.141956   60933 cri.go:89] found id: ""
	I1216 21:01:42.142026   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.142040   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:42.142049   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:42.142117   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:42.178754   60933 cri.go:89] found id: ""
	I1216 21:01:42.178782   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.178818   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:42.178833   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:42.178891   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:42.215872   60933 cri.go:89] found id: ""
	I1216 21:01:42.215905   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.215916   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:42.215923   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:42.215991   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:42.253854   60933 cri.go:89] found id: ""
	I1216 21:01:42.253885   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.253896   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:42.253904   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:42.253972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:42.290963   60933 cri.go:89] found id: ""
	I1216 21:01:42.291008   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.291023   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:42.291039   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:42.291109   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:42.332920   60933 cri.go:89] found id: ""
	I1216 21:01:42.332946   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.332953   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:42.332959   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:42.333006   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:42.375060   60933 cri.go:89] found id: ""
	I1216 21:01:42.375093   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.375104   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:42.375112   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:42.375189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:42.416593   60933 cri.go:89] found id: ""
	I1216 21:01:42.416621   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.416631   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:42.416639   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:42.416651   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:42.475204   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:42.475260   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:42.491022   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:42.491057   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:42.566645   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:42.566672   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:42.566687   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:42.646815   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:42.646856   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:41.436872   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:43.936734   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:42.321709   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:44.321985   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:43.955807   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:46.455508   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:45.191912   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:45.205487   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:45.205548   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:45.245350   60933 cri.go:89] found id: ""
	I1216 21:01:45.245389   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.245397   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:45.245404   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:45.245482   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:45.302126   60933 cri.go:89] found id: ""
	I1216 21:01:45.302158   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.302171   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:45.302178   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:45.302251   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:45.342888   60933 cri.go:89] found id: ""
	I1216 21:01:45.342917   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.342932   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:45.342937   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:45.342990   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:45.381545   60933 cri.go:89] found id: ""
	I1216 21:01:45.381574   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.381585   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:45.381593   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:45.381652   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:45.418081   60933 cri.go:89] found id: ""
	I1216 21:01:45.418118   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.418131   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:45.418138   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:45.418207   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:45.458610   60933 cri.go:89] found id: ""
	I1216 21:01:45.458637   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.458647   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:45.458655   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:45.458713   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:45.500102   60933 cri.go:89] found id: ""
	I1216 21:01:45.500137   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.500148   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:45.500155   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:45.500217   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:45.542074   60933 cri.go:89] found id: ""
	I1216 21:01:45.542103   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.542113   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:45.542122   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:45.542134   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:45.597577   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:45.597614   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:45.614028   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:45.614075   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:45.693014   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:45.693039   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:45.693056   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:45.772260   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:45.772295   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:48.317073   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:48.332176   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:48.332242   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:48.369946   60933 cri.go:89] found id: ""
	I1216 21:01:48.369976   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.369988   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:48.369994   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:48.370059   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:48.407628   60933 cri.go:89] found id: ""
	I1216 21:01:48.407661   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.407672   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:48.407680   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:48.407742   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:48.444377   60933 cri.go:89] found id: ""
	I1216 21:01:48.444403   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.444411   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:48.444416   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:48.444467   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:48.485674   60933 cri.go:89] found id: ""
	I1216 21:01:48.485710   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.485722   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:48.485730   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:48.485785   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:48.530577   60933 cri.go:89] found id: ""
	I1216 21:01:48.530610   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.530621   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:48.530628   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:48.530693   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:48.567128   60933 cri.go:89] found id: ""
	I1216 21:01:48.567151   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.567159   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:48.567165   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:48.567216   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:48.603294   60933 cri.go:89] found id: ""
	I1216 21:01:48.603320   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.603327   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:48.603333   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:48.603392   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:48.646221   60933 cri.go:89] found id: ""
	I1216 21:01:48.646253   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.646265   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:48.646288   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:48.646318   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:48.697589   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:48.697624   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:48.711916   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:48.711947   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:48.789068   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:48.789097   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:48.789113   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:48.872340   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:48.872378   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:45.937806   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.437160   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:46.819986   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.821079   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.456975   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:50.956101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:51.418176   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:51.434851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:51.434948   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:51.478935   60933 cri.go:89] found id: ""
	I1216 21:01:51.478963   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.478975   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:51.478982   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:51.479043   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:51.524581   60933 cri.go:89] found id: ""
	I1216 21:01:51.524611   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.524622   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:51.524629   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:51.524686   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:51.563479   60933 cri.go:89] found id: ""
	I1216 21:01:51.563507   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.563516   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:51.563521   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:51.563578   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:51.601931   60933 cri.go:89] found id: ""
	I1216 21:01:51.601964   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.601975   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:51.601982   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:51.602044   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:51.638984   60933 cri.go:89] found id: ""
	I1216 21:01:51.639014   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.639025   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:51.639032   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:51.639093   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:51.681137   60933 cri.go:89] found id: ""
	I1216 21:01:51.681167   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.681178   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:51.681185   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:51.681263   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:51.722904   60933 cri.go:89] found id: ""
	I1216 21:01:51.722932   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.722941   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:51.722946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:51.722994   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:51.794403   60933 cri.go:89] found id: ""
	I1216 21:01:51.794434   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.794444   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:51.794453   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:51.794464   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:51.850688   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:51.850724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:51.866049   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:51.866079   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:51.949844   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:51.949880   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:51.949894   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:52.028981   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:52.029023   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:50.936202   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:52.936839   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:51.321959   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:53.819864   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:53.455360   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:55.954957   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:54.570192   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:54.585405   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:54.585489   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:54.627670   60933 cri.go:89] found id: ""
	I1216 21:01:54.627701   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.627712   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:54.627719   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:54.627782   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:54.671226   60933 cri.go:89] found id: ""
	I1216 21:01:54.671265   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.671276   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:54.671283   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:54.671337   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:54.705549   60933 cri.go:89] found id: ""
	I1216 21:01:54.705581   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.705592   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:54.705600   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:54.705663   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:54.743638   60933 cri.go:89] found id: ""
	I1216 21:01:54.743664   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.743671   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:54.743677   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:54.743728   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:54.781714   60933 cri.go:89] found id: ""
	I1216 21:01:54.781750   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.781760   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:54.781767   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:54.781831   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:54.830808   60933 cri.go:89] found id: ""
	I1216 21:01:54.830840   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.830851   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:54.830859   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:54.830923   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:54.868539   60933 cri.go:89] found id: ""
	I1216 21:01:54.868565   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.868573   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:54.868578   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:54.868626   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:54.906554   60933 cri.go:89] found id: ""
	I1216 21:01:54.906587   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.906595   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:54.906604   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:54.906617   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:54.960664   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:54.960696   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:54.975657   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:54.975686   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:55.052266   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:55.052293   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:55.052320   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:55.137894   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:55.137937   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:57.682769   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:57.699102   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:57.699184   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:57.764651   60933 cri.go:89] found id: ""
	I1216 21:01:57.764684   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.764692   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:57.764698   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:57.764755   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:57.805358   60933 cri.go:89] found id: ""
	I1216 21:01:57.805385   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.805395   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:57.805404   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:57.805474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:57.843589   60933 cri.go:89] found id: ""
	I1216 21:01:57.843623   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.843634   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:57.843644   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:57.843716   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:57.881725   60933 cri.go:89] found id: ""
	I1216 21:01:57.881748   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.881756   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:57.881761   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:57.881811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:57.922252   60933 cri.go:89] found id: ""
	I1216 21:01:57.922293   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.922305   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:57.922322   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:57.922385   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:57.962532   60933 cri.go:89] found id: ""
	I1216 21:01:57.962555   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.962562   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:57.962567   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:57.962615   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:58.002021   60933 cri.go:89] found id: ""
	I1216 21:01:58.002056   60933 logs.go:282] 0 containers: []
	W1216 21:01:58.002067   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:58.002074   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:58.002137   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:58.035648   60933 cri.go:89] found id: ""
	I1216 21:01:58.035672   60933 logs.go:282] 0 containers: []
	W1216 21:01:58.035680   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:58.035688   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:58.035699   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:58.116142   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:58.116177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:58.157683   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:58.157717   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:58.211686   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:58.211722   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:58.226385   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:58.226409   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:58.302287   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:54.937208   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:57.437396   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:59.438489   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:56.326836   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:58.818671   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:57.955980   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.455212   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.802544   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:00.816325   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:00.816405   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:00.853031   60933 cri.go:89] found id: ""
	I1216 21:02:00.853057   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.853065   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:00.853070   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:00.853122   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:00.891040   60933 cri.go:89] found id: ""
	I1216 21:02:00.891071   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.891082   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:00.891089   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:00.891151   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:00.929145   60933 cri.go:89] found id: ""
	I1216 21:02:00.929168   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.929175   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:00.929181   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:00.929227   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:00.976469   60933 cri.go:89] found id: ""
	I1216 21:02:00.976492   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.976500   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:00.976505   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:00.976553   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:01.015053   60933 cri.go:89] found id: ""
	I1216 21:02:01.015078   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.015086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:01.015092   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:01.015150   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:01.052859   60933 cri.go:89] found id: ""
	I1216 21:02:01.052891   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.052902   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:01.052909   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:01.053028   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:01.091209   60933 cri.go:89] found id: ""
	I1216 21:02:01.091238   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.091259   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:01.091266   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:01.091341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:01.127013   60933 cri.go:89] found id: ""
	I1216 21:02:01.127038   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.127047   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:01.127058   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:01.127072   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:01.179642   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:01.179697   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:01.196390   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:01.196416   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:01.275446   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:01.275478   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:01.275493   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:01.354391   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:01.354429   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:03.897672   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:03.911596   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:03.911654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:03.955700   60933 cri.go:89] found id: ""
	I1216 21:02:03.955726   60933 logs.go:282] 0 containers: []
	W1216 21:02:03.955735   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:03.955741   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:03.955803   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:03.995661   60933 cri.go:89] found id: ""
	I1216 21:02:03.995696   60933 logs.go:282] 0 containers: []
	W1216 21:02:03.995706   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:03.995713   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:03.995772   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:04.031368   60933 cri.go:89] found id: ""
	I1216 21:02:04.031391   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.031398   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:04.031406   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:04.031455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:04.067633   60933 cri.go:89] found id: ""
	I1216 21:02:04.067659   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.067666   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:04.067671   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:04.067719   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:04.105734   60933 cri.go:89] found id: ""
	I1216 21:02:04.105758   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.105768   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:04.105773   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:04.105824   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:04.146542   60933 cri.go:89] found id: ""
	I1216 21:02:04.146564   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.146571   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:04.146577   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:04.146623   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:04.184433   60933 cri.go:89] found id: ""
	I1216 21:02:04.184462   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.184473   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:04.184480   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:04.184551   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:04.223077   60933 cri.go:89] found id: ""
	I1216 21:02:04.223106   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.223117   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:04.223127   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:04.223140   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:04.279618   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:04.279656   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:04.295841   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:04.295865   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:04.372609   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:04.372632   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:04.372648   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:01.937175   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:03.937249   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.819801   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:02.820563   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:05.320087   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:02.955461   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:05.455023   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:07.456981   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:04.457597   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:04.457631   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:07.006004   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:07.020394   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:07.020537   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:07.064242   60933 cri.go:89] found id: ""
	I1216 21:02:07.064274   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.064283   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:07.064289   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:07.064337   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:07.108865   60933 cri.go:89] found id: ""
	I1216 21:02:07.108899   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.108910   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:07.108917   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:07.108985   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:07.149021   60933 cri.go:89] found id: ""
	I1216 21:02:07.149051   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.149060   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:07.149066   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:07.149120   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:07.187808   60933 cri.go:89] found id: ""
	I1216 21:02:07.187833   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.187843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:07.187850   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:07.187912   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:07.228748   60933 cri.go:89] found id: ""
	I1216 21:02:07.228774   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.228785   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:07.228792   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:07.228853   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:07.267961   60933 cri.go:89] found id: ""
	I1216 21:02:07.267996   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.268012   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:07.268021   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:07.268099   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:07.312464   60933 cri.go:89] found id: ""
	I1216 21:02:07.312491   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.312498   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:07.312503   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:07.312554   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:07.351902   60933 cri.go:89] found id: ""
	I1216 21:02:07.351933   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.351946   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:07.351958   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:07.351974   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:07.405985   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:07.406050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:07.420796   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:07.420842   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:07.506527   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:07.506559   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:07.506574   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:07.587965   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:07.588001   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:06.437434   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:08.937843   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:07.320229   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:09.819940   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:09.954900   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:11.955004   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:10.132876   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:10.146785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:10.146858   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:10.189278   60933 cri.go:89] found id: ""
	I1216 21:02:10.189312   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.189324   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:10.189332   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:10.189402   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:10.228331   60933 cri.go:89] found id: ""
	I1216 21:02:10.228370   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.228378   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:10.228383   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:10.228436   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:10.266424   60933 cri.go:89] found id: ""
	I1216 21:02:10.266458   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.266470   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:10.266478   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:10.266542   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:10.305865   60933 cri.go:89] found id: ""
	I1216 21:02:10.305890   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.305902   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:10.305909   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:10.305968   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:10.344211   60933 cri.go:89] found id: ""
	I1216 21:02:10.344239   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.344247   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:10.344253   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:10.344314   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:10.381939   60933 cri.go:89] found id: ""
	I1216 21:02:10.381993   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.382004   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:10.382011   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:10.382076   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:10.418882   60933 cri.go:89] found id: ""
	I1216 21:02:10.418908   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.418915   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:10.418921   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:10.418972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:10.458397   60933 cri.go:89] found id: ""
	I1216 21:02:10.458425   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.458434   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:10.458447   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:10.458462   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:10.472152   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:10.472180   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:10.545888   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:10.545913   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:10.545926   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:10.627223   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:10.627293   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:10.676606   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:10.676633   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:13.227283   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:13.242871   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:13.242954   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:13.280676   60933 cri.go:89] found id: ""
	I1216 21:02:13.280711   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.280723   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:13.280731   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:13.280786   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:13.321357   60933 cri.go:89] found id: ""
	I1216 21:02:13.321389   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.321400   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:13.321408   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:13.321474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:13.359002   60933 cri.go:89] found id: ""
	I1216 21:02:13.359030   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.359042   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:13.359050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:13.359116   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:13.395879   60933 cri.go:89] found id: ""
	I1216 21:02:13.395922   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.395941   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:13.395950   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:13.396017   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:13.436761   60933 cri.go:89] found id: ""
	I1216 21:02:13.436781   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.436788   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:13.436793   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:13.436852   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:13.478839   60933 cri.go:89] found id: ""
	I1216 21:02:13.478869   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.478877   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:13.478883   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:13.478947   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:13.520013   60933 cri.go:89] found id: ""
	I1216 21:02:13.520037   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.520044   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:13.520050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:13.520124   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:13.556973   60933 cri.go:89] found id: ""
	I1216 21:02:13.557001   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.557013   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:13.557023   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:13.557039   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:13.613499   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:13.613537   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:13.628689   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:13.628724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:13.706556   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:13.706576   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:13.706589   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:13.786379   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:13.786419   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:11.436179   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:13.436800   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:11.820109   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:13.820778   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:14.457666   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.955591   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.333578   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:16.347948   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:16.348020   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:16.386928   60933 cri.go:89] found id: ""
	I1216 21:02:16.386955   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.386963   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:16.386969   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:16.387033   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:16.425192   60933 cri.go:89] found id: ""
	I1216 21:02:16.425253   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.425265   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:16.425273   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:16.425355   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:16.465522   60933 cri.go:89] found id: ""
	I1216 21:02:16.465554   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.465565   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:16.465573   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:16.465638   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:16.504567   60933 cri.go:89] found id: ""
	I1216 21:02:16.504605   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.504616   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:16.504624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:16.504694   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:16.541823   60933 cri.go:89] found id: ""
	I1216 21:02:16.541852   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.541864   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:16.541872   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:16.541942   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:16.580898   60933 cri.go:89] found id: ""
	I1216 21:02:16.580927   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.580938   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:16.580946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:16.581003   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:16.626006   60933 cri.go:89] found id: ""
	I1216 21:02:16.626036   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.626046   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:16.626053   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:16.626109   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:16.662686   60933 cri.go:89] found id: ""
	I1216 21:02:16.662712   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.662719   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:16.662728   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:16.662740   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:16.717939   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:16.717978   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:16.733431   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:16.733466   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:16.807379   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:16.807409   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:16.807421   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:16.896455   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:16.896492   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:15.437791   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:17.935778   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.321167   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:18.819624   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:18.955621   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:20.956220   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:19.442959   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:19.458684   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:19.458749   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:19.499907   60933 cri.go:89] found id: ""
	I1216 21:02:19.499938   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.499947   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:19.499954   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:19.500002   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:19.538010   60933 cri.go:89] found id: ""
	I1216 21:02:19.538035   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.538043   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:19.538049   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:19.538148   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:19.577097   60933 cri.go:89] found id: ""
	I1216 21:02:19.577131   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.577139   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:19.577145   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:19.577196   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:19.617288   60933 cri.go:89] found id: ""
	I1216 21:02:19.617316   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.617326   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:19.617332   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:19.617392   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:19.658066   60933 cri.go:89] found id: ""
	I1216 21:02:19.658090   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.658097   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:19.658103   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:19.658153   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:19.696077   60933 cri.go:89] found id: ""
	I1216 21:02:19.696108   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.696121   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:19.696131   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:19.696189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:19.737657   60933 cri.go:89] found id: ""
	I1216 21:02:19.737692   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.737704   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:19.737712   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:19.737776   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:19.778699   60933 cri.go:89] found id: ""
	I1216 21:02:19.778729   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.778738   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:19.778746   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:19.778757   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:19.841941   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:19.841979   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:19.857752   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:19.857788   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:19.935980   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:19.936004   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:19.936020   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:20.019999   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:20.020046   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:22.566398   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:22.580376   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:22.580472   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:22.620240   60933 cri.go:89] found id: ""
	I1216 21:02:22.620273   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.620284   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:22.620292   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:22.620355   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:22.656413   60933 cri.go:89] found id: ""
	I1216 21:02:22.656444   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.656455   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:22.656463   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:22.656531   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:22.690956   60933 cri.go:89] found id: ""
	I1216 21:02:22.690978   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.690986   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:22.690992   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:22.691040   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:22.734851   60933 cri.go:89] found id: ""
	I1216 21:02:22.734885   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.734895   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:22.734903   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:22.734969   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:22.774416   60933 cri.go:89] found id: ""
	I1216 21:02:22.774450   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.774461   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:22.774467   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:22.774535   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:22.811162   60933 cri.go:89] found id: ""
	I1216 21:02:22.811192   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.811204   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:22.811212   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:22.811296   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:22.851955   60933 cri.go:89] found id: ""
	I1216 21:02:22.851980   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.851987   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:22.851993   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:22.852051   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:22.888699   60933 cri.go:89] found id: ""
	I1216 21:02:22.888725   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.888736   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:22.888747   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:22.888769   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:22.944065   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:22.944100   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:22.960842   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:22.960872   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:23.036229   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:23.036251   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:23.036263   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:23.122493   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:23.122535   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:19.936687   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:21.937222   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:24.437190   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:20.820544   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:22.820771   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.319776   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:22.956523   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.456180   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.667995   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:25.682152   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:25.682222   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:25.719092   60933 cri.go:89] found id: ""
	I1216 21:02:25.719120   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.719130   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:25.719135   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:25.719190   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:25.757668   60933 cri.go:89] found id: ""
	I1216 21:02:25.757702   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.757712   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:25.757720   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:25.757791   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:25.809743   60933 cri.go:89] found id: ""
	I1216 21:02:25.809776   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.809787   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:25.809795   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:25.809857   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:25.849181   60933 cri.go:89] found id: ""
	I1216 21:02:25.849211   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.849222   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:25.849230   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:25.849295   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:25.891032   60933 cri.go:89] found id: ""
	I1216 21:02:25.891079   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.891091   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:25.891098   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:25.891169   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:25.930549   60933 cri.go:89] found id: ""
	I1216 21:02:25.930575   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.930583   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:25.930589   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:25.930639   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:25.971709   60933 cri.go:89] found id: ""
	I1216 21:02:25.971736   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.971744   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:25.971749   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:25.971797   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:26.007728   60933 cri.go:89] found id: ""
	I1216 21:02:26.007760   60933 logs.go:282] 0 containers: []
	W1216 21:02:26.007769   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:26.007778   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:26.007791   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:26.059710   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:26.059752   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:26.074596   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:26.074627   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:26.145892   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:26.145913   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:26.145924   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:26.225961   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:26.226000   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:28.772974   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:28.787001   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:28.787078   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:28.828176   60933 cri.go:89] found id: ""
	I1216 21:02:28.828206   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.828214   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:28.828223   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:28.828292   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:28.872750   60933 cri.go:89] found id: ""
	I1216 21:02:28.872781   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.872792   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:28.872798   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:28.872859   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:28.914844   60933 cri.go:89] found id: ""
	I1216 21:02:28.914871   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.914879   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:28.914884   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:28.914934   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:28.953541   60933 cri.go:89] found id: ""
	I1216 21:02:28.953569   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.953579   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:28.953587   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:28.953647   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:28.992768   60933 cri.go:89] found id: ""
	I1216 21:02:28.992797   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.992808   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:28.992816   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:28.992882   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:29.030069   60933 cri.go:89] found id: ""
	I1216 21:02:29.030102   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.030113   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:29.030121   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:29.030187   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:29.068629   60933 cri.go:89] found id: ""
	I1216 21:02:29.068658   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.068666   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:29.068677   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:29.068726   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:29.103664   60933 cri.go:89] found id: ""
	I1216 21:02:29.103697   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.103708   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:29.103719   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:29.103732   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:29.151225   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:29.151276   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:29.209448   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:29.209499   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:29.225232   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:29.225257   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:29.309812   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:29.309832   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:29.309846   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:26.937193   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:28.937302   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:27.320052   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:29.820220   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:27.956244   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:29.957111   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:32.456969   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:31.896263   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:31.912378   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:31.912455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:31.950479   60933 cri.go:89] found id: ""
	I1216 21:02:31.950508   60933 logs.go:282] 0 containers: []
	W1216 21:02:31.950527   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:31.950535   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:31.950600   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:31.990479   60933 cri.go:89] found id: ""
	I1216 21:02:31.990504   60933 logs.go:282] 0 containers: []
	W1216 21:02:31.990515   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:31.990533   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:31.990599   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:32.032808   60933 cri.go:89] found id: ""
	I1216 21:02:32.032834   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.032843   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:32.032853   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:32.032913   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:32.069719   60933 cri.go:89] found id: ""
	I1216 21:02:32.069748   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.069759   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:32.069772   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:32.069830   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:32.106652   60933 cri.go:89] found id: ""
	I1216 21:02:32.106685   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.106694   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:32.106701   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:32.106767   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:32.145921   60933 cri.go:89] found id: ""
	I1216 21:02:32.145949   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.145957   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:32.145963   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:32.146014   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:32.206313   60933 cri.go:89] found id: ""
	I1216 21:02:32.206342   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.206351   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:32.206356   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:32.206410   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:32.262757   60933 cri.go:89] found id: ""
	I1216 21:02:32.262794   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.262806   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:32.262818   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:32.262832   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:32.320221   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:32.320251   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:32.375395   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:32.375437   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:32.391103   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:32.391137   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:32.474709   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:32.474741   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:32.474757   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:31.436689   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:33.436921   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:32.320631   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:34.819726   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:34.956369   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:37.455577   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:35.058809   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:35.073074   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:35.073157   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:35.115280   60933 cri.go:89] found id: ""
	I1216 21:02:35.115305   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.115312   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:35.115318   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:35.115378   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:35.151561   60933 cri.go:89] found id: ""
	I1216 21:02:35.151589   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.151597   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:35.151603   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:35.151654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:35.192061   60933 cri.go:89] found id: ""
	I1216 21:02:35.192088   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.192095   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:35.192111   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:35.192161   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:35.231493   60933 cri.go:89] found id: ""
	I1216 21:02:35.231523   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.231531   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:35.231538   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:35.231586   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:35.271236   60933 cri.go:89] found id: ""
	I1216 21:02:35.271291   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.271300   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:35.271306   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:35.271368   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:35.309950   60933 cri.go:89] found id: ""
	I1216 21:02:35.309980   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.309991   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:35.309999   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:35.310062   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:35.347762   60933 cri.go:89] found id: ""
	I1216 21:02:35.347790   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.347797   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:35.347803   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:35.347851   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:35.390732   60933 cri.go:89] found id: ""
	I1216 21:02:35.390757   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.390765   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:35.390774   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:35.390785   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:35.447068   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:35.447112   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:35.462873   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:35.462904   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:35.541120   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:35.541145   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:35.541162   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:35.627073   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:35.627120   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:38.170994   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:38.194371   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:38.194434   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:38.248023   60933 cri.go:89] found id: ""
	I1216 21:02:38.248050   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.248061   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:38.248069   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:38.248147   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:38.300143   60933 cri.go:89] found id: ""
	I1216 21:02:38.300175   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.300185   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:38.300193   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:38.300253   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:38.345273   60933 cri.go:89] found id: ""
	I1216 21:02:38.345300   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.345308   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:38.345314   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:38.345389   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:38.383032   60933 cri.go:89] found id: ""
	I1216 21:02:38.383066   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.383075   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:38.383081   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:38.383135   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:38.426042   60933 cri.go:89] found id: ""
	I1216 21:02:38.426074   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.426086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:38.426094   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:38.426159   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:38.467596   60933 cri.go:89] found id: ""
	I1216 21:02:38.467625   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.467634   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:38.467640   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:38.467692   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:38.509340   60933 cri.go:89] found id: ""
	I1216 21:02:38.509380   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.509391   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:38.509399   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:38.509470   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:38.549306   60933 cri.go:89] found id: ""
	I1216 21:02:38.549337   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.549354   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:38.549365   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:38.549381   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:38.564091   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:38.564131   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:38.639173   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:38.639201   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:38.639219   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:38.716320   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:38.716376   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:38.756779   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:38.756815   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:35.437230   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:37.938595   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:36.820302   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:39.319712   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:39.954558   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:41.955761   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:41.310680   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:41.327606   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:41.327684   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:41.371622   60933 cri.go:89] found id: ""
	I1216 21:02:41.371657   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.371670   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:41.371679   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:41.371739   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:41.408149   60933 cri.go:89] found id: ""
	I1216 21:02:41.408187   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.408198   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:41.408203   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:41.408252   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:41.448445   60933 cri.go:89] found id: ""
	I1216 21:02:41.448471   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.448478   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:41.448484   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:41.448533   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:41.489957   60933 cri.go:89] found id: ""
	I1216 21:02:41.489989   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.490000   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:41.490007   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:41.490069   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:41.532891   60933 cri.go:89] found id: ""
	I1216 21:02:41.532918   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.532930   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:41.532937   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:41.532992   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:41.570315   60933 cri.go:89] found id: ""
	I1216 21:02:41.570342   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.570351   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:41.570357   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:41.570455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:41.606833   60933 cri.go:89] found id: ""
	I1216 21:02:41.606867   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.606880   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:41.606890   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:41.606959   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:41.643862   60933 cri.go:89] found id: ""
	I1216 21:02:41.643886   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.643894   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:41.643902   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:41.643914   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:41.657621   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:41.657654   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:41.732256   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:41.732281   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:41.732295   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:41.822045   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:41.822081   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:41.863900   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:41.863933   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:40.436149   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:42.436247   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:44.436916   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:41.321155   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:43.819721   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:43.956057   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:46.455802   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:44.425154   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:44.440148   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:44.440223   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:44.478216   60933 cri.go:89] found id: ""
	I1216 21:02:44.478247   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.478258   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:44.478266   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:44.478329   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:44.517054   60933 cri.go:89] found id: ""
	I1216 21:02:44.517078   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.517084   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:44.517090   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:44.517137   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:44.554683   60933 cri.go:89] found id: ""
	I1216 21:02:44.554778   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.554801   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:44.554845   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:44.554927   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:44.600748   60933 cri.go:89] found id: ""
	I1216 21:02:44.600788   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.600800   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:44.600809   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:44.600863   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:44.637564   60933 cri.go:89] found id: ""
	I1216 21:02:44.637592   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.637600   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:44.637606   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:44.637656   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:44.676619   60933 cri.go:89] found id: ""
	I1216 21:02:44.676662   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.676674   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:44.676683   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:44.676755   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:44.715920   60933 cri.go:89] found id: ""
	I1216 21:02:44.715956   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.715964   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:44.715970   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:44.716027   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:44.755134   60933 cri.go:89] found id: ""
	I1216 21:02:44.755167   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.755179   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:44.755191   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:44.755202   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:44.796135   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:44.796164   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:44.850550   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:44.850593   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:44.865278   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:44.865305   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:44.942987   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:44.943013   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:44.943026   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:47.529850   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:47.546292   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:47.546369   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:47.589597   60933 cri.go:89] found id: ""
	I1216 21:02:47.589627   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.589640   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:47.589648   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:47.589713   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:47.630998   60933 cri.go:89] found id: ""
	I1216 21:02:47.631030   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.631043   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:47.631051   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:47.631118   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:47.670118   60933 cri.go:89] found id: ""
	I1216 21:02:47.670150   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.670162   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:47.670169   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:47.670233   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:47.714516   60933 cri.go:89] found id: ""
	I1216 21:02:47.714549   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.714560   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:47.714568   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:47.714631   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:47.752042   60933 cri.go:89] found id: ""
	I1216 21:02:47.752074   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.752086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:47.752093   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:47.752158   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:47.793612   60933 cri.go:89] found id: ""
	I1216 21:02:47.793645   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.793656   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:47.793664   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:47.793734   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:47.833489   60933 cri.go:89] found id: ""
	I1216 21:02:47.833518   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.833529   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:47.833541   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:47.833602   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:47.869744   60933 cri.go:89] found id: ""
	I1216 21:02:47.869772   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.869783   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:47.869793   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:47.869809   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:47.910640   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:47.910674   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:47.965747   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:47.965781   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:47.979760   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:47.979786   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:48.056887   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:48.056917   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:48.056933   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:46.439409   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.937248   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:46.320935   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.820700   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.955697   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.955859   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.641224   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:50.657267   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:50.657346   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:50.696890   60933 cri.go:89] found id: ""
	I1216 21:02:50.696916   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.696924   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:50.696930   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:50.696993   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:50.734485   60933 cri.go:89] found id: ""
	I1216 21:02:50.734514   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.734524   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:50.734533   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:50.734598   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:50.776241   60933 cri.go:89] found id: ""
	I1216 21:02:50.776268   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.776277   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:50.776283   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:50.776358   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:50.816449   60933 cri.go:89] found id: ""
	I1216 21:02:50.816482   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.816493   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:50.816501   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:50.816561   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:50.857458   60933 cri.go:89] found id: ""
	I1216 21:02:50.857481   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.857488   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:50.857494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:50.857556   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:50.895367   60933 cri.go:89] found id: ""
	I1216 21:02:50.895391   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.895398   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:50.895404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:50.895466   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:50.934101   60933 cri.go:89] found id: ""
	I1216 21:02:50.934128   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.934138   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:50.934152   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:50.934212   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:50.978625   60933 cri.go:89] found id: ""
	I1216 21:02:50.978654   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.978665   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:50.978675   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:50.978688   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:51.061867   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:51.061908   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:51.101188   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:51.101228   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:51.157426   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:51.157470   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:51.172835   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:51.172882   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:51.247678   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:53.748503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:53.763357   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:53.763425   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:53.807963   60933 cri.go:89] found id: ""
	I1216 21:02:53.807990   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.807999   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:53.808005   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:53.808063   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:53.846840   60933 cri.go:89] found id: ""
	I1216 21:02:53.846867   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.846876   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:53.846881   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:53.846929   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:53.885099   60933 cri.go:89] found id: ""
	I1216 21:02:53.885131   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.885146   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:53.885156   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:53.885226   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:53.923859   60933 cri.go:89] found id: ""
	I1216 21:02:53.923890   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.923901   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:53.923908   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:53.923972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:53.964150   60933 cri.go:89] found id: ""
	I1216 21:02:53.964176   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.964186   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:53.964201   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:53.964265   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:54.004676   60933 cri.go:89] found id: ""
	I1216 21:02:54.004707   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.004718   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:54.004725   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:54.004789   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:54.042560   60933 cri.go:89] found id: ""
	I1216 21:02:54.042585   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.042595   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:54.042603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:54.042666   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:54.081002   60933 cri.go:89] found id: ""
	I1216 21:02:54.081030   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.081038   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:54.081046   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:54.081058   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:54.132825   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:54.132865   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:54.147793   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:54.147821   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:54.226668   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:54.226692   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:54.226704   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:54.307792   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:54.307832   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:50.938230   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:53.436746   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.820949   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:53.320283   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:52.957187   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:54.958212   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.456612   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:56.852207   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:56.866404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:56.866469   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:56.911786   60933 cri.go:89] found id: ""
	I1216 21:02:56.911811   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.911820   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:56.911829   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:56.911886   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:56.953491   60933 cri.go:89] found id: ""
	I1216 21:02:56.953520   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.953535   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:56.953543   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:56.953610   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:56.991569   60933 cri.go:89] found id: ""
	I1216 21:02:56.991605   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.991616   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:56.991622   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:56.991685   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:57.026808   60933 cri.go:89] found id: ""
	I1216 21:02:57.026837   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.026845   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:57.026851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:57.026913   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:57.065539   60933 cri.go:89] found id: ""
	I1216 21:02:57.065569   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.065577   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:57.065583   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:57.065642   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:57.103911   60933 cri.go:89] found id: ""
	I1216 21:02:57.103942   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.103952   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:57.103960   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:57.104015   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:57.141177   60933 cri.go:89] found id: ""
	I1216 21:02:57.141200   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.141207   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:57.141213   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:57.141262   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:57.178532   60933 cri.go:89] found id: ""
	I1216 21:02:57.178590   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.178604   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:57.178614   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:57.178629   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:57.234811   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:57.234846   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:57.251540   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:57.251569   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:57.329029   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:57.329061   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:57.329077   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:57.412624   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:57.412665   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:55.436981   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.438061   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:55.819607   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.819648   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.820705   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.955043   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:01.956284   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.960422   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:59.974889   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:59.974966   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:00.012641   60933 cri.go:89] found id: ""
	I1216 21:03:00.012669   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.012676   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:00.012682   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:00.012730   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:00.053730   60933 cri.go:89] found id: ""
	I1216 21:03:00.053766   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.053778   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:00.053785   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:00.053847   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:00.091213   60933 cri.go:89] found id: ""
	I1216 21:03:00.091261   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.091274   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:00.091283   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:00.091357   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:00.131357   60933 cri.go:89] found id: ""
	I1216 21:03:00.131382   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.131390   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:00.131396   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:00.131460   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:00.168331   60933 cri.go:89] found id: ""
	I1216 21:03:00.168362   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.168373   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:00.168380   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:00.168446   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:00.208326   60933 cri.go:89] found id: ""
	I1216 21:03:00.208360   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.208369   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:00.208377   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:00.208440   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:00.245775   60933 cri.go:89] found id: ""
	I1216 21:03:00.245800   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.245808   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:00.245814   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:00.245863   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:00.283062   60933 cri.go:89] found id: ""
	I1216 21:03:00.283091   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.283100   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:00.283108   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:00.283119   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:00.358767   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:00.358787   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:00.358799   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:00.443422   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:00.443460   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:00.491511   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:00.491551   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:00.566131   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:00.566172   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:03.080319   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:03.094733   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:03.094818   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:03.132388   60933 cri.go:89] found id: ""
	I1216 21:03:03.132419   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.132428   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:03.132433   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:03.132488   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:03.172345   60933 cri.go:89] found id: ""
	I1216 21:03:03.172374   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.172386   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:03.172393   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:03.172474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:03.210444   60933 cri.go:89] found id: ""
	I1216 21:03:03.210479   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.210488   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:03.210494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:03.210544   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:03.248605   60933 cri.go:89] found id: ""
	I1216 21:03:03.248644   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.248656   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:03.248664   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:03.248723   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:03.286822   60933 cri.go:89] found id: ""
	I1216 21:03:03.286854   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.286862   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:03.286868   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:03.286921   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:03.329304   60933 cri.go:89] found id: ""
	I1216 21:03:03.329333   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.329344   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:03.329352   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:03.329417   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:03.367337   60933 cri.go:89] found id: ""
	I1216 21:03:03.367361   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.367368   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:03.367373   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:03.367420   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:03.409799   60933 cri.go:89] found id: ""
	I1216 21:03:03.409821   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.409829   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:03.409838   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:03.409850   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:03.466941   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:03.466976   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:03.483090   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:03.483117   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:03.566835   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:03.566860   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:03.566878   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:03.649747   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:03.649793   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:59.936221   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:01.936251   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:03.936714   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:02.319063   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:04.319653   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:03.956397   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:05.956531   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:06.193505   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:06.207797   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:06.207878   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:06.245401   60933 cri.go:89] found id: ""
	I1216 21:03:06.245437   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.245448   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:06.245456   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:06.245521   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:06.301205   60933 cri.go:89] found id: ""
	I1216 21:03:06.301239   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.301250   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:06.301257   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:06.301326   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:06.340325   60933 cri.go:89] found id: ""
	I1216 21:03:06.340352   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.340362   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:06.340369   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:06.340429   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:06.378321   60933 cri.go:89] found id: ""
	I1216 21:03:06.378351   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.378359   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:06.378365   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:06.378422   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:06.416354   60933 cri.go:89] found id: ""
	I1216 21:03:06.416390   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.416401   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:06.416409   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:06.416473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:06.459926   60933 cri.go:89] found id: ""
	I1216 21:03:06.459955   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.459967   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:06.459975   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:06.460063   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:06.501818   60933 cri.go:89] found id: ""
	I1216 21:03:06.501849   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.501860   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:06.501866   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:06.501926   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:06.537552   60933 cri.go:89] found id: ""
	I1216 21:03:06.537583   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.537598   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:06.537607   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:06.537621   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:06.592170   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:06.592212   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:06.607148   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:06.607183   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:06.676114   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:06.676140   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:06.676151   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:06.756009   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:06.756052   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:09.298166   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:09.313104   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:09.313189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:09.356598   60933 cri.go:89] found id: ""
	I1216 21:03:09.356625   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.356640   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:09.356649   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:09.356715   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:05.937241   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:07.938858   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:06.322260   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:08.818974   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:08.455838   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:10.955332   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:09.395406   60933 cri.go:89] found id: ""
	I1216 21:03:09.395439   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.395449   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:09.395456   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:09.395521   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:09.440401   60933 cri.go:89] found id: ""
	I1216 21:03:09.440423   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.440430   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:09.440435   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:09.440504   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:09.478798   60933 cri.go:89] found id: ""
	I1216 21:03:09.478828   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.478843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:09.478853   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:09.478921   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:09.515542   60933 cri.go:89] found id: ""
	I1216 21:03:09.515575   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.515587   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:09.515596   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:09.515654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:09.554150   60933 cri.go:89] found id: ""
	I1216 21:03:09.554183   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.554194   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:09.554205   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:09.554279   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:09.591699   60933 cri.go:89] found id: ""
	I1216 21:03:09.591730   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.591740   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:09.591747   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:09.591811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:09.629938   60933 cri.go:89] found id: ""
	I1216 21:03:09.629970   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.629980   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:09.629991   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:09.630008   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:09.711255   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:09.711284   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:09.711300   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:09.790202   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:09.790243   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:09.839567   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:09.839597   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:09.893010   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:09.893050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:12.409934   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:12.423715   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:12.423789   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:12.461995   60933 cri.go:89] found id: ""
	I1216 21:03:12.462038   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.462046   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:12.462052   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:12.462101   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:12.501738   60933 cri.go:89] found id: ""
	I1216 21:03:12.501769   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.501779   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:12.501785   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:12.501833   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:12.541758   60933 cri.go:89] found id: ""
	I1216 21:03:12.541785   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.541795   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:12.541802   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:12.541850   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:12.579173   60933 cri.go:89] found id: ""
	I1216 21:03:12.579199   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.579206   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:12.579212   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:12.579302   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:12.624382   60933 cri.go:89] found id: ""
	I1216 21:03:12.624407   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.624418   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:12.624426   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:12.624488   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:12.665139   60933 cri.go:89] found id: ""
	I1216 21:03:12.665178   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.665190   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:12.665200   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:12.665274   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:12.711586   60933 cri.go:89] found id: ""
	I1216 21:03:12.711611   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.711619   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:12.711627   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:12.711678   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:12.761566   60933 cri.go:89] found id: ""
	I1216 21:03:12.761600   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.761612   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:12.761624   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:12.761640   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:12.824282   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:12.824315   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:12.839335   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:12.839371   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:12.918317   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:12.918341   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:12.918357   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:13.000375   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:13.000410   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:10.438136   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:12.936742   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:11.319284   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:13.320036   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:15.322965   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:12.955450   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:14.956186   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:16.956603   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:15.542372   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:15.556877   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:15.556960   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:15.599345   60933 cri.go:89] found id: ""
	I1216 21:03:15.599378   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.599389   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:15.599414   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:15.599479   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:15.642072   60933 cri.go:89] found id: ""
	I1216 21:03:15.642106   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.642116   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:15.642124   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:15.642189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:15.679989   60933 cri.go:89] found id: ""
	I1216 21:03:15.680025   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.680036   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:15.680044   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:15.680103   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:15.718343   60933 cri.go:89] found id: ""
	I1216 21:03:15.718371   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.718378   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:15.718384   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:15.718433   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:15.759937   60933 cri.go:89] found id: ""
	I1216 21:03:15.759971   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.759981   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:15.759988   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:15.760081   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:15.801434   60933 cri.go:89] found id: ""
	I1216 21:03:15.801463   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.801471   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:15.801477   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:15.801540   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:15.841855   60933 cri.go:89] found id: ""
	I1216 21:03:15.841879   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.841886   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:15.841892   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:15.841962   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:15.883951   60933 cri.go:89] found id: ""
	I1216 21:03:15.883974   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.883982   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:15.883990   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:15.884004   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:15.960868   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:15.960902   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:16.005700   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:16.005730   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:16.061128   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:16.061165   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:16.075601   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:16.075630   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:16.147810   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:18.648677   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:18.663298   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:18.663367   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:18.713281   60933 cri.go:89] found id: ""
	I1216 21:03:18.713313   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.713324   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:18.713332   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:18.713396   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:18.764861   60933 cri.go:89] found id: ""
	I1216 21:03:18.764892   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.764905   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:18.764912   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:18.764978   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:18.816140   60933 cri.go:89] found id: ""
	I1216 21:03:18.816170   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.816180   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:18.816188   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:18.816251   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:18.852118   60933 cri.go:89] found id: ""
	I1216 21:03:18.852151   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.852163   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:18.852171   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:18.852235   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:18.887996   60933 cri.go:89] found id: ""
	I1216 21:03:18.888018   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.888025   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:18.888031   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:18.888089   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:18.925415   60933 cri.go:89] found id: ""
	I1216 21:03:18.925437   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.925445   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:18.925451   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:18.925498   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:18.964853   60933 cri.go:89] found id: ""
	I1216 21:03:18.964884   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.964892   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:18.964897   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:18.964964   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:19.000822   60933 cri.go:89] found id: ""
	I1216 21:03:19.000848   60933 logs.go:282] 0 containers: []
	W1216 21:03:19.000856   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:19.000865   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:19.000879   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:19.051571   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:19.051612   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:19.066737   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:19.066767   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:19.143120   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:19.143144   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:19.143156   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:19.229811   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:19.229850   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:15.437189   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:17.439345   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:17.820374   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:19.820460   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:19.455707   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:21.955275   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:21.776440   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:21.792869   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:21.792951   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:21.831100   60933 cri.go:89] found id: ""
	I1216 21:03:21.831127   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.831134   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:21.831140   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:21.831196   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:21.869124   60933 cri.go:89] found id: ""
	I1216 21:03:21.869147   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.869155   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:21.869160   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:21.869215   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:21.909891   60933 cri.go:89] found id: ""
	I1216 21:03:21.909926   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.909938   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:21.909946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:21.910032   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:21.949140   60933 cri.go:89] found id: ""
	I1216 21:03:21.949169   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.949179   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:21.949186   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:21.949245   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:21.987741   60933 cri.go:89] found id: ""
	I1216 21:03:21.987771   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.987780   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:21.987785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:21.987839   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:22.025565   60933 cri.go:89] found id: ""
	I1216 21:03:22.025593   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.025601   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:22.025607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:22.025659   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:22.062076   60933 cri.go:89] found id: ""
	I1216 21:03:22.062110   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.062120   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:22.062127   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:22.062198   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:22.102037   60933 cri.go:89] found id: ""
	I1216 21:03:22.102065   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.102093   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:22.102105   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:22.102122   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:22.159185   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:22.159219   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:22.175139   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:22.175168   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:22.255769   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:22.255801   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:22.255817   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:22.339633   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:22.339681   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:19.937328   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:22.435709   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.436704   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:22.319227   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.819278   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.455668   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:26.956382   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.883865   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:24.898198   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:24.898287   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:24.939472   60933 cri.go:89] found id: ""
	I1216 21:03:24.939500   60933 logs.go:282] 0 containers: []
	W1216 21:03:24.939511   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:24.939518   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:24.939583   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:24.981798   60933 cri.go:89] found id: ""
	I1216 21:03:24.981822   60933 logs.go:282] 0 containers: []
	W1216 21:03:24.981829   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:24.981834   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:24.981889   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:25.021332   60933 cri.go:89] found id: ""
	I1216 21:03:25.021366   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.021373   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:25.021379   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:25.021431   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:25.057811   60933 cri.go:89] found id: ""
	I1216 21:03:25.057836   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.057843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:25.057848   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:25.057907   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:25.093852   60933 cri.go:89] found id: ""
	I1216 21:03:25.093881   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.093890   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:25.093895   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:25.093945   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:25.132779   60933 cri.go:89] found id: ""
	I1216 21:03:25.132813   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.132825   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:25.132834   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:25.132912   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:25.173942   60933 cri.go:89] found id: ""
	I1216 21:03:25.173967   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.173974   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:25.173990   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:25.174048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:25.213105   60933 cri.go:89] found id: ""
	I1216 21:03:25.213127   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.213135   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:25.213144   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:25.213155   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:25.267517   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:25.267557   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:25.284144   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:25.284177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:25.362901   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:25.362931   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:25.362947   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:25.450193   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:25.450227   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:27.995716   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:28.012044   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:28.012138   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:28.050404   60933 cri.go:89] found id: ""
	I1216 21:03:28.050432   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.050441   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:28.050446   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:28.050492   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:28.087830   60933 cri.go:89] found id: ""
	I1216 21:03:28.087855   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.087862   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:28.087885   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:28.087933   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:28.125122   60933 cri.go:89] found id: ""
	I1216 21:03:28.125147   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.125154   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:28.125160   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:28.125233   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:28.160619   60933 cri.go:89] found id: ""
	I1216 21:03:28.160646   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.160655   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:28.160661   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:28.160726   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:28.198951   60933 cri.go:89] found id: ""
	I1216 21:03:28.198977   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.198986   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:28.198993   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:28.199059   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:28.236596   60933 cri.go:89] found id: ""
	I1216 21:03:28.236621   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.236629   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:28.236635   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:28.236707   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:28.273955   60933 cri.go:89] found id: ""
	I1216 21:03:28.273979   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.273986   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:28.273992   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:28.274061   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:28.311908   60933 cri.go:89] found id: ""
	I1216 21:03:28.311943   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.311954   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:28.311965   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:28.311979   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:28.363870   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:28.363910   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:28.379919   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:28.379945   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:28.459998   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:28.460019   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:28.460030   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:28.543229   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:28.543306   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:26.936661   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:29.437169   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:26.820563   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:29.319981   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:28.956791   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.456708   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.086525   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:31.100833   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:31.100950   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:31.141356   60933 cri.go:89] found id: ""
	I1216 21:03:31.141385   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.141396   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:31.141403   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:31.141465   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:31.176609   60933 cri.go:89] found id: ""
	I1216 21:03:31.176641   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.176650   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:31.176657   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:31.176721   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:31.213959   60933 cri.go:89] found id: ""
	I1216 21:03:31.213984   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.213991   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:31.213997   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:31.214058   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:31.255183   60933 cri.go:89] found id: ""
	I1216 21:03:31.255208   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.255215   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:31.255220   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:31.255297   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:31.293475   60933 cri.go:89] found id: ""
	I1216 21:03:31.293501   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.293508   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:31.293514   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:31.293561   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:31.332010   60933 cri.go:89] found id: ""
	I1216 21:03:31.332041   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.332052   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:31.332061   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:31.332119   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:31.370301   60933 cri.go:89] found id: ""
	I1216 21:03:31.370331   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.370342   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:31.370349   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:31.370414   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:31.419526   60933 cri.go:89] found id: ""
	I1216 21:03:31.419553   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.419561   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:31.419570   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:31.419583   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:31.480125   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:31.480160   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:31.495464   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:31.495497   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:31.570747   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:31.570773   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:31.570788   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:31.651521   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:31.651564   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:34.200969   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:34.216519   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:34.216596   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:34.254185   60933 cri.go:89] found id: ""
	I1216 21:03:34.254218   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.254227   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:34.254242   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:34.254312   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:34.293194   60933 cri.go:89] found id: ""
	I1216 21:03:34.293225   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.293236   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:34.293242   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:34.293297   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:34.335002   60933 cri.go:89] found id: ""
	I1216 21:03:34.335030   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.335042   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:34.335050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:34.335112   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:34.370854   60933 cri.go:89] found id: ""
	I1216 21:03:34.370880   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.370887   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:34.370893   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:34.370938   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:31.439597   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.935941   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.820337   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.820497   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.955185   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:36.455713   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:34.409155   60933 cri.go:89] found id: ""
	I1216 21:03:34.409181   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.409189   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:34.409195   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:34.409256   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:34.448555   60933 cri.go:89] found id: ""
	I1216 21:03:34.448583   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.448594   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:34.448601   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:34.448663   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:34.486800   60933 cri.go:89] found id: ""
	I1216 21:03:34.486829   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.486842   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:34.486851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:34.486919   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:34.530274   60933 cri.go:89] found id: ""
	I1216 21:03:34.530299   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.530307   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:34.530317   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:34.530335   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:34.601587   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:34.601620   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:34.601637   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:34.680215   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:34.680250   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:34.721362   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:34.721389   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:34.776652   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:34.776693   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:37.292877   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:37.306976   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:37.307060   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:37.349370   60933 cri.go:89] found id: ""
	I1216 21:03:37.349405   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.349416   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:37.349424   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:37.349486   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:37.387213   60933 cri.go:89] found id: ""
	I1216 21:03:37.387271   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.387285   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:37.387294   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:37.387361   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:37.427138   60933 cri.go:89] found id: ""
	I1216 21:03:37.427164   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.427175   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:37.427182   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:37.427269   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:37.466751   60933 cri.go:89] found id: ""
	I1216 21:03:37.466776   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.466783   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:37.466788   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:37.466846   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:37.505078   60933 cri.go:89] found id: ""
	I1216 21:03:37.505115   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.505123   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:37.505128   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:37.505189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:37.548642   60933 cri.go:89] found id: ""
	I1216 21:03:37.548665   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.548673   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:37.548679   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:37.548738   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:37.592354   60933 cri.go:89] found id: ""
	I1216 21:03:37.592379   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.592386   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:37.592391   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:37.592441   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:37.631179   60933 cri.go:89] found id: ""
	I1216 21:03:37.631212   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.631221   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:37.631230   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:37.631261   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:37.683021   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:37.683062   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:37.698056   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:37.698087   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:37.774368   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:37.774397   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:37.774422   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:37.860470   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:37.860511   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:35.936409   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:37.936652   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:36.319436   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:38.819727   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:38.456251   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.957354   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.405278   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:40.420390   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:40.420473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:40.463963   60933 cri.go:89] found id: ""
	I1216 21:03:40.463994   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.464033   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:40.464041   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:40.464107   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:40.510321   60933 cri.go:89] found id: ""
	I1216 21:03:40.510352   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.510369   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:40.510376   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:40.510441   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:40.546580   60933 cri.go:89] found id: ""
	I1216 21:03:40.546610   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.546619   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:40.546624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:40.546686   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:40.583109   60933 cri.go:89] found id: ""
	I1216 21:03:40.583136   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.583144   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:40.583149   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:40.583202   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:40.628747   60933 cri.go:89] found id: ""
	I1216 21:03:40.628771   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.628778   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:40.628784   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:40.628845   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:40.663757   60933 cri.go:89] found id: ""
	I1216 21:03:40.663785   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.663796   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:40.663804   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:40.663867   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:40.703483   60933 cri.go:89] found id: ""
	I1216 21:03:40.703513   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.703522   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:40.703528   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:40.703592   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:40.742585   60933 cri.go:89] found id: ""
	I1216 21:03:40.742622   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.742632   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:40.742641   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:40.742653   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:40.757771   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:40.757809   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:40.837615   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:40.837642   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:40.837656   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:40.915403   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:40.915442   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:40.960762   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:40.960790   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:43.515302   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:43.530831   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:43.530906   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:43.571680   60933 cri.go:89] found id: ""
	I1216 21:03:43.571704   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.571712   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:43.571718   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:43.571779   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:43.615912   60933 cri.go:89] found id: ""
	I1216 21:03:43.615940   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.615948   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:43.615955   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:43.616013   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:43.654206   60933 cri.go:89] found id: ""
	I1216 21:03:43.654231   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.654241   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:43.654249   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:43.654309   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:43.690509   60933 cri.go:89] found id: ""
	I1216 21:03:43.690533   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.690541   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:43.690548   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:43.690595   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:43.728601   60933 cri.go:89] found id: ""
	I1216 21:03:43.728627   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.728634   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:43.728639   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:43.728685   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:43.769092   60933 cri.go:89] found id: ""
	I1216 21:03:43.769130   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.769198   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:43.769215   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:43.769292   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:43.812492   60933 cri.go:89] found id: ""
	I1216 21:03:43.812525   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.812537   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:43.812544   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:43.812613   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:43.852748   60933 cri.go:89] found id: ""
	I1216 21:03:43.852778   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.852787   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:43.852795   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:43.852807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:43.907800   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:43.907839   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:43.922806   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:43.922833   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:44.002511   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:44.002538   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:44.002551   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:44.081760   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:44.081801   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:40.437134   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:42.437214   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.820244   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:43.321298   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:43.455891   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:45.456281   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:46.625868   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:46.640266   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:46.640341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:46.677137   60933 cri.go:89] found id: ""
	I1216 21:03:46.677168   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.677179   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:46.677185   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:46.677241   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:46.714340   60933 cri.go:89] found id: ""
	I1216 21:03:46.714373   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.714382   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:46.714389   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:46.714449   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:46.752713   60933 cri.go:89] found id: ""
	I1216 21:03:46.752743   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.752754   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:46.752763   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:46.752827   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:46.790787   60933 cri.go:89] found id: ""
	I1216 21:03:46.790821   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.790837   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:46.790845   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:46.790902   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:46.827905   60933 cri.go:89] found id: ""
	I1216 21:03:46.827934   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.827945   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:46.827954   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:46.828023   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:46.863522   60933 cri.go:89] found id: ""
	I1216 21:03:46.863547   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.863563   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:46.863570   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:46.863634   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:46.906005   60933 cri.go:89] found id: ""
	I1216 21:03:46.906035   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.906044   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:46.906049   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:46.906103   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:46.947639   60933 cri.go:89] found id: ""
	I1216 21:03:46.947668   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.947679   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:46.947691   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:46.947706   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:47.001693   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:47.001732   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:47.023122   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:47.023166   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:47.108257   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:47.108291   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:47.108303   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:47.184768   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:47.184807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:44.940074   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.437155   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:45.819943   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.820443   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.820700   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.955794   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.960595   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:52.455630   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.729433   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:49.743836   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:49.743903   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:49.783021   60933 cri.go:89] found id: ""
	I1216 21:03:49.783054   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.783066   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:49.783074   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:49.783144   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:49.820371   60933 cri.go:89] found id: ""
	I1216 21:03:49.820399   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.820409   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:49.820416   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:49.820476   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:49.857918   60933 cri.go:89] found id: ""
	I1216 21:03:49.857948   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.857959   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:49.857967   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:49.858033   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:49.899517   60933 cri.go:89] found id: ""
	I1216 21:03:49.899548   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.899558   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:49.899565   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:49.899632   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:49.938771   60933 cri.go:89] found id: ""
	I1216 21:03:49.938797   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.938805   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:49.938810   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:49.938857   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:49.975748   60933 cri.go:89] found id: ""
	I1216 21:03:49.975781   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.975792   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:49.975800   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:49.975876   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:50.013057   60933 cri.go:89] found id: ""
	I1216 21:03:50.013082   60933 logs.go:282] 0 containers: []
	W1216 21:03:50.013090   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:50.013127   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:50.013178   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:50.049106   60933 cri.go:89] found id: ""
	I1216 21:03:50.049138   60933 logs.go:282] 0 containers: []
	W1216 21:03:50.049150   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:50.049161   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:50.049176   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:50.063815   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:50.063847   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:50.137801   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:50.137826   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:50.137841   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:50.218456   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:50.218495   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:50.263347   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:50.263379   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:52.824077   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:52.838096   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:52.838185   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:52.880550   60933 cri.go:89] found id: ""
	I1216 21:03:52.880582   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.880593   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:52.880600   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:52.880658   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:52.919728   60933 cri.go:89] found id: ""
	I1216 21:03:52.919751   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.919759   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:52.919764   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:52.919819   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:52.957519   60933 cri.go:89] found id: ""
	I1216 21:03:52.957542   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.957549   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:52.957555   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:52.957607   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:52.996631   60933 cri.go:89] found id: ""
	I1216 21:03:52.996663   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.996673   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:52.996681   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:52.996745   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:53.059902   60933 cri.go:89] found id: ""
	I1216 21:03:53.060014   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.060030   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:53.060039   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:53.060105   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:53.099367   60933 cri.go:89] found id: ""
	I1216 21:03:53.099395   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.099406   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:53.099419   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:53.099486   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:53.140668   60933 cri.go:89] found id: ""
	I1216 21:03:53.140696   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.140704   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:53.140709   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:53.140777   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:53.179182   60933 cri.go:89] found id: ""
	I1216 21:03:53.179208   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.179216   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:53.179225   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:53.179236   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:53.233441   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:53.233481   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:53.247526   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:53.247569   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:53.321868   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:53.321895   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:53.321911   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:53.410904   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:53.410959   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:49.936523   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:51.936955   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.441538   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:52.319658   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.319887   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.955490   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:57.456080   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:55.954371   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:55.968506   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:55.968570   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:56.005087   60933 cri.go:89] found id: ""
	I1216 21:03:56.005118   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.005130   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:56.005137   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:56.005205   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:56.039443   60933 cri.go:89] found id: ""
	I1216 21:03:56.039467   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.039475   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:56.039486   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:56.039537   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:56.078181   60933 cri.go:89] found id: ""
	I1216 21:03:56.078213   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.078224   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:56.078231   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:56.078289   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:56.115809   60933 cri.go:89] found id: ""
	I1216 21:03:56.115833   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.115841   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:56.115848   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:56.115901   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:56.154299   60933 cri.go:89] found id: ""
	I1216 21:03:56.154323   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.154330   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:56.154336   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:56.154395   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:56.193069   60933 cri.go:89] found id: ""
	I1216 21:03:56.193098   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.193106   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:56.193112   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:56.193161   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:56.231067   60933 cri.go:89] found id: ""
	I1216 21:03:56.231099   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.231118   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:56.231125   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:56.231191   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:56.270980   60933 cri.go:89] found id: ""
	I1216 21:03:56.271011   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.271022   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:56.271035   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:56.271050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:56.321374   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:56.321405   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:56.336802   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:56.336847   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:56.414052   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:56.414078   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:56.414091   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:56.499118   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:56.499158   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:59.049386   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:59.063191   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:59.063300   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:59.102136   60933 cri.go:89] found id: ""
	I1216 21:03:59.102169   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.102180   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:59.102187   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:59.102255   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:59.138311   60933 cri.go:89] found id: ""
	I1216 21:03:59.138340   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.138357   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:59.138364   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:59.138431   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:59.176131   60933 cri.go:89] found id: ""
	I1216 21:03:59.176159   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.176169   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:59.176177   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:59.176259   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:59.214274   60933 cri.go:89] found id: ""
	I1216 21:03:59.214308   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.214320   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:59.214327   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:59.214397   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:59.254499   60933 cri.go:89] found id: ""
	I1216 21:03:59.254524   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.254531   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:59.254537   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:59.254602   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:59.292715   60933 cri.go:89] found id: ""
	I1216 21:03:59.292755   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.292765   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:59.292772   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:59.292836   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:59.333279   60933 cri.go:89] found id: ""
	I1216 21:03:59.333314   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.333325   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:59.333332   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:59.333404   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:59.372071   60933 cri.go:89] found id: ""
	I1216 21:03:59.372104   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.372116   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:59.372126   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:59.372143   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:59.389021   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:59.389051   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 21:03:56.936508   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:59.438217   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:56.323300   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:58.819599   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:59.456242   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:01.956873   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	W1216 21:03:59.503281   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:59.503304   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:59.503316   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:59.581761   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:59.581797   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:59.627604   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:59.627646   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:02.179425   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:02.195786   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:04:02.195850   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:04:02.239763   60933 cri.go:89] found id: ""
	I1216 21:04:02.239790   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.239801   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:04:02.239809   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:04:02.239873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:04:02.278885   60933 cri.go:89] found id: ""
	I1216 21:04:02.278914   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.278926   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:04:02.278935   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:04:02.279004   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:04:02.320701   60933 cri.go:89] found id: ""
	I1216 21:04:02.320731   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.320742   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:04:02.320749   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:04:02.320811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:04:02.357726   60933 cri.go:89] found id: ""
	I1216 21:04:02.357757   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.357767   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:04:02.357773   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:04:02.357826   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:04:02.399577   60933 cri.go:89] found id: ""
	I1216 21:04:02.399609   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.399618   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:04:02.399624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:04:02.399687   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:04:02.445559   60933 cri.go:89] found id: ""
	I1216 21:04:02.445590   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.445600   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:04:02.445607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:04:02.445670   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:04:02.482983   60933 cri.go:89] found id: ""
	I1216 21:04:02.483015   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.483027   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:04:02.483035   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:04:02.483116   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:04:02.523028   60933 cri.go:89] found id: ""
	I1216 21:04:02.523055   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.523063   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:04:02.523073   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:04:02.523084   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:02.577447   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:04:02.577487   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:04:02.594539   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:04:02.594567   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:04:02.683805   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:04:02.683832   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:04:02.683848   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:04:02.763377   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:04:02.763416   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:04:01.937214   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:04.436771   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:01.319860   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:03.320323   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:04.454654   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:06.456145   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
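	When a metrics-server pod sits in this not-Ready state, the usual next step is to inspect it directly. A small sketch using one of the pod names from this run; `<profile>` is a placeholder for the matching kubeconfig context, and `deploy/metrics-server` assumes the addon's default deployment name:

		kubectl --context <profile> -n kube-system get pod metrics-server-f79f97bbb-d6gmd -o wide
		kubectl --context <profile> -n kube-system describe pod metrics-server-f79f97bbb-d6gmd
		kubectl --context <profile> -n kube-system logs deploy/metrics-server --tail=100

	The exact failure reason is not captured in this section of the log, only the repeated Ready=False status.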
	I1216 21:04:05.311029   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:05.328358   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:04:05.328438   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:04:05.367378   60933 cri.go:89] found id: ""
	I1216 21:04:05.367402   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.367409   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:04:05.367419   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:04:05.367468   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:04:05.406268   60933 cri.go:89] found id: ""
	I1216 21:04:05.406291   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.406301   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:04:05.406306   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:04:05.406353   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:04:05.444737   60933 cri.go:89] found id: ""
	I1216 21:04:05.444767   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.444778   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:04:05.444787   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:04:05.444836   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:04:05.484044   60933 cri.go:89] found id: ""
	I1216 21:04:05.484132   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.484153   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:04:05.484161   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:04:05.484222   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:04:05.523395   60933 cri.go:89] found id: ""
	I1216 21:04:05.523420   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.523431   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:04:05.523439   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:04:05.523501   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:04:05.566925   60933 cri.go:89] found id: ""
	I1216 21:04:05.566954   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.566967   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:04:05.566974   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:04:05.567036   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:04:05.611275   60933 cri.go:89] found id: ""
	I1216 21:04:05.611303   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.611314   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:04:05.611321   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:04:05.611396   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:04:05.650340   60933 cri.go:89] found id: ""
	I1216 21:04:05.650371   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.650379   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:04:05.650389   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:04:05.650400   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:05.702277   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:04:05.702321   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:04:05.718685   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:04:05.718713   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:04:05.794979   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:04:05.795005   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:04:05.795020   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:04:05.897348   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:04:05.897383   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:04:08.447268   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:08.462553   60933 kubeadm.go:597] duration metric: took 4m2.545161532s to restartPrimaryControlPlane
	W1216 21:04:08.462621   60933 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:08.462650   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:06.437699   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:08.936904   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:05.813413   60829 pod_ready.go:82] duration metric: took 4m0.000648161s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:05.813448   60829 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:05.813472   60829 pod_ready.go:39] duration metric: took 4m14.577422135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:05.813498   60829 kubeadm.go:597] duration metric: took 4m22.010606819s to restartPrimaryControlPlane
	W1216 21:04:05.813559   60829 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:05.813593   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
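	Both stalled clusters fall back to the same recovery path: tear the control plane down with kubeadm reset and re-run kubeadm init. A minimal sketch of that sequence as run here, with paths and the CRI socket taken from the log (run inside the node); the trailing `--ignore-preflight-errors=...` stands in for the full list that appears later in this log:

		sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" \
		  kubeadm reset --cri-socket /var/run/crio/crio.sock --force
		sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
		sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" \
		  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=...

	Reset time varies with how much state has to be cleaned up; the two runs below complete in roughly 1.9s and 28s respectively.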
	I1216 21:04:10.315541   60933 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.85286561s)
	I1216 21:04:10.315622   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:10.330937   60933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:10.343702   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:10.356498   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:10.356526   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:10.356579   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:04:10.367777   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:10.367847   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:10.379109   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:04:10.389258   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:10.389313   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:10.399959   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:04:10.410664   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:10.410734   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:10.423138   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:04:10.433922   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:10.433976   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
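	The stanza above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when it does not match (or, as here, does not exist). The same pattern as a loop, a sketch assuming the port used by this cluster (8443):

		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"
		done

	With all four files already absent, every grep exits with status 2 and the rm calls are effectively no-ops before kubeadm init regenerates the files.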
	I1216 21:04:10.445297   60933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:10.524236   60933 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 21:04:10.524344   60933 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:10.680331   60933 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:10.680489   60933 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:10.680641   60933 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 21:04:10.877305   60933 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:10.879375   60933 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:10.879496   60933 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:10.879567   60933 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:10.879647   60933 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:10.879748   60933 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:10.879865   60933 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:10.880127   60933 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:10.881047   60933 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:10.881874   60933 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:10.882778   60933 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:10.883678   60933 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:10.884029   60933 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:10.884130   60933 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:11.034011   60933 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:11.273509   60933 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:11.477553   60933 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:11.542158   60933 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:11.565791   60933 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:11.567317   60933 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:11.567409   60933 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:11.763223   60933 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:08.955135   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:10.957061   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:11.766107   60933 out.go:235]   - Booting up control plane ...
	I1216 21:04:11.766257   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:11.766367   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:11.768484   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:11.773601   60933 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:11.780554   60933 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 21:04:11.436931   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:13.437532   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:13.455175   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:15.455370   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.456801   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:15.936107   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.937233   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.949449   60421 pod_ready.go:82] duration metric: took 4m0.000885381s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:17.949484   60421 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:17.949501   60421 pod_ready.go:39] duration metric: took 4m10.554596731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:17.949525   60421 kubeadm.go:597] duration metric: took 4m42.414672113s to restartPrimaryControlPlane
	W1216 21:04:17.949588   60421 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:17.949619   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:19.938104   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:22.436710   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:24.936550   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:26.936809   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:29.437478   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:33.833179   60829 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (28.019561403s)
	I1216 21:04:33.833265   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:33.850170   60829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:33.862112   60829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:33.873752   60829 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:33.873777   60829 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:33.873832   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1216 21:04:33.885038   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:33.885115   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:33.897352   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1216 21:04:33.911055   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:33.911115   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:33.925077   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1216 21:04:33.938925   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:33.938997   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:33.952022   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1216 21:04:33.963099   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:33.963176   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:04:33.974080   60829 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:34.031525   60829 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:04:34.031643   60829 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:34.153173   60829 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:34.153340   60829 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:34.153453   60829 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:04:34.166258   60829 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:31.936620   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:33.938157   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:34.168275   60829 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:34.168388   60829 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:34.168463   60829 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:34.168545   60829 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:34.168633   60829 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:34.168740   60829 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:34.168837   60829 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:34.168934   60829 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:34.169020   60829 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:34.169119   60829 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:34.169222   60829 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:34.169278   60829 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:34.169365   60829 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:34.277660   60829 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:34.526364   60829 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:04:34.629728   60829 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:34.757824   60829 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:34.838922   60829 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:34.839431   60829 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:34.841925   60829 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:34.843761   60829 out.go:235]   - Booting up control plane ...
	I1216 21:04:34.843874   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:34.843945   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:34.846919   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:34.866038   60829 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:34.875031   60829 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:34.875112   60829 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:35.016713   60829 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:04:35.016879   60829 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:04:36.437043   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:38.437584   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:36.017947   60829 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001159452s
	I1216 21:04:36.018086   60829 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:04:40.519460   60829 kubeadm.go:310] [api-check] The API server is healthy after 4.501460025s
	I1216 21:04:40.533680   60829 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:04:40.552611   60829 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:04:40.585691   60829 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:04:40.585905   60829 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-327790 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:04:40.613752   60829 kubeadm.go:310] [bootstrap-token] Using token: w829op.p4bszg1q76emsxit
	I1216 21:04:40.615428   60829 out.go:235]   - Configuring RBAC rules ...
	I1216 21:04:40.615556   60829 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:04:40.629296   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:04:40.638449   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:04:40.644143   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:04:40.648665   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:04:40.653151   60829 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:04:40.926399   60829 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:04:41.370569   60829 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:04:41.927555   60829 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:04:41.928692   60829 kubeadm.go:310] 
	I1216 21:04:41.928769   60829 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:04:41.928779   60829 kubeadm.go:310] 
	I1216 21:04:41.928851   60829 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:04:41.928878   60829 kubeadm.go:310] 
	I1216 21:04:41.928928   60829 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:04:41.929005   60829 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:04:41.929053   60829 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:04:41.929060   60829 kubeadm.go:310] 
	I1216 21:04:41.929107   60829 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:04:41.929114   60829 kubeadm.go:310] 
	I1216 21:04:41.929151   60829 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:04:41.929157   60829 kubeadm.go:310] 
	I1216 21:04:41.929205   60829 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:04:41.929264   60829 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:04:41.929325   60829 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:04:41.929354   60829 kubeadm.go:310] 
	I1216 21:04:41.929527   60829 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:04:41.929657   60829 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:04:41.929674   60829 kubeadm.go:310] 
	I1216 21:04:41.929787   60829 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token w829op.p4bszg1q76emsxit \
	I1216 21:04:41.929941   60829 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:04:41.929975   60829 kubeadm.go:310] 	--control-plane 
	I1216 21:04:41.929984   60829 kubeadm.go:310] 
	I1216 21:04:41.930103   60829 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:04:41.930124   60829 kubeadm.go:310] 
	I1216 21:04:41.930245   60829 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token w829op.p4bszg1q76emsxit \
	I1216 21:04:41.930378   60829 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:04:41.931554   60829 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:04:41.931685   60829 cni.go:84] Creating CNI manager for ""
	I1216 21:04:41.931699   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:04:41.933748   60829 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:04:40.937882   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:43.436864   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:41.935317   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:04:41.947502   60829 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
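	The bridge CNI configuration is written as a single conflist file; its exact contents (496 bytes here) are not reproduced in the log, but it can be inspected on the node after this step. A brief sketch, with `<profile>` as a placeholder for this profile name:

		minikube ssh -p <profile> -- sudo cat /etc/cni/net.d/1-k8s.conflist
		minikube ssh -p <profile> -- ls -la /etc/cni/net.d/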
	I1216 21:04:41.976180   60829 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:04:41.976288   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:41.976323   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-327790 minikube.k8s.io/updated_at=2024_12_16T21_04_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=default-k8s-diff-port-327790 minikube.k8s.io/primary=true
	I1216 21:04:42.010154   60829 ops.go:34] apiserver oom_adj: -16
	I1216 21:04:42.181919   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:42.682201   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:43.182557   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:43.682418   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:44.182318   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:44.682793   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.182342   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.682678   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.777484   60829 kubeadm.go:1113] duration metric: took 3.801254961s to wait for elevateKubeSystemPrivileges
	I1216 21:04:45.777522   60829 kubeadm.go:394] duration metric: took 5m2.030533321s to StartCluster
	I1216 21:04:45.777543   60829 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:45.777644   60829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:04:45.780034   60829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:45.780369   60829 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:04:45.780450   60829 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:04:45.780566   60829 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780579   60829 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780595   60829 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-327790"
	W1216 21:04:45.780606   60829 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:04:45.780599   60829 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780609   60829 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-327790"
	I1216 21:04:45.780627   60829 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-327790"
	I1216 21:04:45.780627   60829 config.go:182] Loaded profile config "default-k8s-diff-port-327790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	W1216 21:04:45.780638   60829 addons.go:243] addon metrics-server should already be in state true
	I1216 21:04:45.780648   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.780675   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.781091   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781091   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781132   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781136   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.781174   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.781137   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.782022   60829 out.go:177] * Verifying Kubernetes components...
	I1216 21:04:45.783549   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:04:45.799326   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42295
	I1216 21:04:45.799443   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
	I1216 21:04:45.799865   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.800491   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.800510   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.800588   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.801082   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.801102   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.801178   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I1216 21:04:45.801202   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.801517   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.801539   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.801707   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.801925   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.801959   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.801974   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.801992   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.802319   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.802817   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.802857   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.805750   60829 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-327790"
	W1216 21:04:45.805775   60829 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:04:45.805806   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.806153   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.806193   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.820545   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46261
	I1216 21:04:45.821062   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.821598   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.821625   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.822086   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.822294   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.823995   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.824775   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I1216 21:04:45.825269   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.825754   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.825778   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.826117   60829 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:04:45.826158   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.826843   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.826892   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.827527   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:04:45.827557   60829 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:04:45.827577   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.829352   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I1216 21:04:45.829769   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.830197   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.830217   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.830543   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.830767   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.831413   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.832010   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.832030   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.832202   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.832424   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.832496   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.832847   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.833056   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.834475   60829 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:04:45.835944   60829 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:45.835965   60829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:04:45.835983   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.839118   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.839533   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.839560   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.839744   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.839947   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.840087   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.840218   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.845365   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
	I1216 21:04:45.845925   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.847042   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.847060   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.847450   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.847669   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.849934   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.850165   60829 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:45.850182   60829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:04:45.850199   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.853083   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.853493   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.853518   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.853679   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.853848   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.854024   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.854177   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.978935   60829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:04:46.010601   60829 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-327790" to be "Ready" ...
	I1216 21:04:46.019674   60829 node_ready.go:49] node "default-k8s-diff-port-327790" has status "Ready":"True"
	I1216 21:04:46.019704   60829 node_ready.go:38] duration metric: took 9.066576ms for node "default-k8s-diff-port-327790" to be "Ready" ...
	I1216 21:04:46.019715   60829 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:46.033957   60829 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:46.103779   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:04:46.103812   60829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:04:46.120299   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:46.171131   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:04:46.171171   60829 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:04:46.171280   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:46.244556   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:46.244587   60829 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:04:46.332646   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:47.461793   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.34145582s)
	I1216 21:04:47.461871   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.129193295s)
	I1216 21:04:47.461793   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.290486436s)
	I1216 21:04:47.461899   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461913   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461918   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.461875   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461982   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.461927   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462463   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462469   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462480   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462488   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462494   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462504   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462506   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462511   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462516   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462521   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462529   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462556   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462573   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462581   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462588   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462805   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462816   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462816   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462827   60829 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-327790"
	I1216 21:04:47.462841   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462848   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.463049   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.463067   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.524466   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.524497   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.524822   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.524843   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.524869   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.526679   60829 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1216 21:04:45.861404   60421 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.911759863s)
	I1216 21:04:45.861483   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:45.889560   60421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:45.922090   60421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:45.945227   60421 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:45.945261   60421 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:45.945306   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:04:45.960594   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:45.960666   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:45.980613   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:04:46.005349   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:46.005431   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:46.021683   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:04:46.032967   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:46.033042   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:46.064718   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:04:46.078736   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:46.078805   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:04:46.092798   60421 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:46.293434   60421 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:04:45.430910   60215 pod_ready.go:82] duration metric: took 4m0.000948437s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:45.430950   60215 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:45.430970   60215 pod_ready.go:39] duration metric: took 4m12.926677248s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:45.431002   60215 kubeadm.go:597] duration metric: took 4m20.847109652s to restartPrimaryControlPlane
	W1216 21:04:45.431059   60215 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:45.431092   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:47.527909   60829 addons.go:510] duration metric: took 1.747463467s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1216 21:04:48.047956   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:51.781856   60933 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 21:04:51.782285   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:04:51.782543   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:04:54.704462   60421 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:04:54.704514   60421 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:54.704600   60421 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:54.704736   60421 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:54.704839   60421 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:04:54.704894   60421 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:54.706650   60421 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:54.706771   60421 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:54.706865   60421 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:54.706999   60421 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:54.707113   60421 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:54.707256   60421 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:54.707344   60421 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:54.707478   60421 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:54.707573   60421 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:54.707683   60421 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:54.707806   60421 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:54.707851   60421 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:54.707902   60421 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:54.707968   60421 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:54.708056   60421 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:04:54.708127   60421 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:54.708225   60421 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:54.708305   60421 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:54.708427   60421 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:54.708526   60421 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:54.710014   60421 out.go:235]   - Booting up control plane ...
	I1216 21:04:54.710113   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:54.710197   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:54.710254   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:54.710361   60421 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:54.710457   60421 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:54.710511   60421 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:54.710670   60421 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:04:54.710792   60421 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:04:54.710852   60421 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.532878ms
	I1216 21:04:54.710912   60421 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:04:54.710982   60421 kubeadm.go:310] [api-check] The API server is healthy after 5.50189872s
	I1216 21:04:54.711125   60421 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:04:54.711281   60421 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:04:54.711362   60421 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:04:54.711618   60421 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-232338 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:04:54.711712   60421 kubeadm.go:310] [bootstrap-token] Using token: knn1cl.i9horbjuutctjfyf
	I1216 21:04:54.714363   60421 out.go:235]   - Configuring RBAC rules ...
	I1216 21:04:54.714488   60421 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:04:54.714560   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:04:54.714674   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:04:54.714820   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:04:54.714914   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:04:54.714981   60421 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:04:54.715083   60421 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:04:54.715159   60421 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:04:54.715228   60421 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:04:54.715237   60421 kubeadm.go:310] 
	I1216 21:04:54.715345   60421 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:04:54.715359   60421 kubeadm.go:310] 
	I1216 21:04:54.715455   60421 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:04:54.715463   60421 kubeadm.go:310] 
	I1216 21:04:54.715510   60421 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:04:54.715596   60421 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:04:54.715669   60421 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:04:54.715679   60421 kubeadm.go:310] 
	I1216 21:04:54.715767   60421 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:04:54.715775   60421 kubeadm.go:310] 
	I1216 21:04:54.715842   60421 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:04:54.715851   60421 kubeadm.go:310] 
	I1216 21:04:54.715908   60421 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:04:54.715969   60421 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:04:54.716026   60421 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:04:54.716032   60421 kubeadm.go:310] 
	I1216 21:04:54.716106   60421 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:04:54.716171   60421 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:04:54.716177   60421 kubeadm.go:310] 
	I1216 21:04:54.716240   60421 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token knn1cl.i9horbjuutctjfyf \
	I1216 21:04:54.716340   60421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:04:54.716374   60421 kubeadm.go:310] 	--control-plane 
	I1216 21:04:54.716384   60421 kubeadm.go:310] 
	I1216 21:04:54.716457   60421 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:04:54.716467   60421 kubeadm.go:310] 
	I1216 21:04:54.716534   60421 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token knn1cl.i9horbjuutctjfyf \
	I1216 21:04:54.716634   60421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:04:54.716644   60421 cni.go:84] Creating CNI manager for ""
	I1216 21:04:54.716651   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:04:54.718260   60421 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:04:50.542207   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:52.542453   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:55.040960   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:56.042145   60829 pod_ready.go:93] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.042175   60829 pod_ready.go:82] duration metric: took 10.008191514s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.042192   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.047996   60829 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.048022   60829 pod_ready.go:82] duration metric: took 5.821217ms for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.048031   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.052582   60829 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.052608   60829 pod_ready.go:82] duration metric: took 4.569092ms for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.052619   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.056805   60829 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.056834   60829 pod_ready.go:82] duration metric: took 4.206726ms for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.056841   60829 pod_ready.go:39] duration metric: took 10.037112061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:56.056855   60829 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:04:56.056904   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:56.076993   60829 api_server.go:72] duration metric: took 10.296578804s to wait for apiserver process to appear ...
	I1216 21:04:56.077023   60829 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:04:56.077045   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 21:04:56.082250   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1216 21:04:56.083348   60829 api_server.go:141] control plane version: v1.32.0
	I1216 21:04:56.083369   60829 api_server.go:131] duration metric: took 6.339438ms to wait for apiserver health ...
	I1216 21:04:56.083377   60829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:04:56.090255   60829 system_pods.go:59] 9 kube-system pods found
	I1216 21:04:56.090290   60829 system_pods.go:61] "coredns-668d6bf9bc-2qcfx" [4ac98efa-96ff-4564-93de-4a61de7a6507] Running
	I1216 21:04:56.090302   60829 system_pods.go:61] "coredns-668d6bf9bc-fb7wx" [f2f2c0e7-893f-45ba-8da9-3b03f5560d89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:04:56.090310   60829 system_pods.go:61] "etcd-default-k8s-diff-port-327790" [5363e160-ef78-4737-89f9-5f4d0f0eab95] Running
	I1216 21:04:56.090318   60829 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-327790" [b53c6be6-e476-4a5a-80c2-96e701736820] Running
	I1216 21:04:56.090324   60829 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-327790" [57d8747a-7258-48c3-9bcd-6fedaa8b7431] Running
	I1216 21:04:56.090329   60829 system_pods.go:61] "kube-proxy-njqp8" [e5f1789d-b343-4c2e-b078-4a15f4b18569] Running
	I1216 21:04:56.090334   60829 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-327790" [e2303bbd-b9d9-4392-867f-6f5f43f74826] Running
	I1216 21:04:56.090342   60829 system_pods.go:61] "metrics-server-f79f97bbb-84xtf" [569c6717-dc12-474f-8156-d2dd9e410a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:04:56.090349   60829 system_pods.go:61] "storage-provisioner" [4e5b12f0-3d96-4dd0-81e7-300b82058d47] Running
	I1216 21:04:56.090360   60829 system_pods.go:74] duration metric: took 6.975795ms to wait for pod list to return data ...
	I1216 21:04:56.090373   60829 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:04:56.093967   60829 default_sa.go:45] found service account: "default"
	I1216 21:04:56.093998   60829 default_sa.go:55] duration metric: took 3.616693ms for default service account to be created ...
	I1216 21:04:56.094010   60829 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:04:56.241532   60829 system_pods.go:86] 9 kube-system pods found
	I1216 21:04:56.241568   60829 system_pods.go:89] "coredns-668d6bf9bc-2qcfx" [4ac98efa-96ff-4564-93de-4a61de7a6507] Running
	I1216 21:04:56.241582   60829 system_pods.go:89] "coredns-668d6bf9bc-fb7wx" [f2f2c0e7-893f-45ba-8da9-3b03f5560d89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:04:56.241589   60829 system_pods.go:89] "etcd-default-k8s-diff-port-327790" [5363e160-ef78-4737-89f9-5f4d0f0eab95] Running
	I1216 21:04:56.241597   60829 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-327790" [b53c6be6-e476-4a5a-80c2-96e701736820] Running
	I1216 21:04:56.241605   60829 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-327790" [57d8747a-7258-48c3-9bcd-6fedaa8b7431] Running
	I1216 21:04:56.241611   60829 system_pods.go:89] "kube-proxy-njqp8" [e5f1789d-b343-4c2e-b078-4a15f4b18569] Running
	I1216 21:04:56.241617   60829 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-327790" [e2303bbd-b9d9-4392-867f-6f5f43f74826] Running
	I1216 21:04:56.241624   60829 system_pods.go:89] "metrics-server-f79f97bbb-84xtf" [569c6717-dc12-474f-8156-d2dd9e410a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:04:56.241629   60829 system_pods.go:89] "storage-provisioner" [4e5b12f0-3d96-4dd0-81e7-300b82058d47] Running
	I1216 21:04:56.241639   60829 system_pods.go:126] duration metric: took 147.621114ms to wait for k8s-apps to be running ...
	I1216 21:04:56.241656   60829 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:04:56.241730   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:56.258891   60829 system_svc.go:56] duration metric: took 17.227056ms WaitForService to wait for kubelet
	I1216 21:04:56.258935   60829 kubeadm.go:582] duration metric: took 10.478521255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:04:56.258962   60829 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:04:56.438641   60829 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:04:56.438667   60829 node_conditions.go:123] node cpu capacity is 2
	I1216 21:04:56.438679   60829 node_conditions.go:105] duration metric: took 179.711624ms to run NodePressure ...
	I1216 21:04:56.438692   60829 start.go:241] waiting for startup goroutines ...
	I1216 21:04:56.438700   60829 start.go:246] waiting for cluster config update ...
	I1216 21:04:56.438714   60829 start.go:255] writing updated cluster config ...
	I1216 21:04:56.438975   60829 ssh_runner.go:195] Run: rm -f paused
	I1216 21:04:56.490195   60829 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:04:56.492395   60829 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-327790" cluster and "default" namespace by default
	I1216 21:04:54.719483   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:04:54.732035   60421 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:04:54.754010   60421 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:04:54.754122   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:54.754177   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-232338 minikube.k8s.io/updated_at=2024_12_16T21_04_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=no-preload-232338 minikube.k8s.io/primary=true
	I1216 21:04:54.773008   60421 ops.go:34] apiserver oom_adj: -16
	I1216 21:04:55.009573   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:55.510039   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:56.009645   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:56.509608   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:57.009714   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:57.509902   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.009901   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.509631   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.632896   60421 kubeadm.go:1113] duration metric: took 3.878846316s to wait for elevateKubeSystemPrivileges
	I1216 21:04:58.632933   60421 kubeadm.go:394] duration metric: took 5m23.15322559s to StartCluster
	I1216 21:04:58.632951   60421 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:58.633031   60421 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:04:58.635409   60421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:58.635720   60421 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:04:58.635835   60421 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:04:58.635944   60421 addons.go:69] Setting storage-provisioner=true in profile "no-preload-232338"
	I1216 21:04:58.635958   60421 config.go:182] Loaded profile config "no-preload-232338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:04:58.635966   60421 addons.go:234] Setting addon storage-provisioner=true in "no-preload-232338"
	I1216 21:04:58.635969   60421 addons.go:69] Setting default-storageclass=true in profile "no-preload-232338"
	W1216 21:04:58.635975   60421 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:04:58.635986   60421 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-232338"
	I1216 21:04:58.636005   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.635997   60421 addons.go:69] Setting metrics-server=true in profile "no-preload-232338"
	I1216 21:04:58.636029   60421 addons.go:234] Setting addon metrics-server=true in "no-preload-232338"
	W1216 21:04:58.636038   60421 addons.go:243] addon metrics-server should already be in state true
	I1216 21:04:58.636069   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.636428   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636460   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.636428   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636513   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636532   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.636549   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.637558   60421 out.go:177] * Verifying Kubernetes components...
	I1216 21:04:58.639254   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:04:58.652770   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
	I1216 21:04:58.652789   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35093
	I1216 21:04:58.653247   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.653368   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.653818   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.653836   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.653944   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.653963   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.654562   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.654565   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.654775   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.655078   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.655117   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.656383   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I1216 21:04:58.656987   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.657520   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.657553   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.657933   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.658517   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.658566   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.658692   60421 addons.go:234] Setting addon default-storageclass=true in "no-preload-232338"
	W1216 21:04:58.658708   60421 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:04:58.658737   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.659001   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.659043   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.672942   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34153
	I1216 21:04:58.673478   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.674034   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.674063   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.674421   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.674594   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I1216 21:04:58.674614   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.674994   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.675686   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.675699   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.676334   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.676480   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.676898   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.676931   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.679230   60421 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:04:58.680032   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33309
	I1216 21:04:58.680609   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.680754   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:04:58.680772   60421 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:04:58.680794   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.681202   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.681221   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.681610   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.681815   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.683608   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.684266   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.684765   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.684793   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.684925   60421 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:04:56.783069   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:04:56.783323   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:04:58.684932   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.685156   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.685321   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.685515   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.686360   60421 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:58.686379   60421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:04:58.686396   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.689909   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.690365   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.690392   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.690698   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.690927   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.691095   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.691305   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.695899   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36017
	I1216 21:04:58.696274   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.696758   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.696777   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.697064   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.697225   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.698530   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.698751   60421 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:58.698766   60421 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:04:58.698784   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.701986   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.702420   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.702473   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.702655   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.702839   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.702979   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.703197   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.866115   60421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:04:58.892287   60421 node_ready.go:35] waiting up to 6m0s for node "no-preload-232338" to be "Ready" ...
	I1216 21:04:58.949580   60421 node_ready.go:49] node "no-preload-232338" has status "Ready":"True"
	I1216 21:04:58.949610   60421 node_ready.go:38] duration metric: took 57.274849ms for node "no-preload-232338" to be "Ready" ...
	I1216 21:04:58.949622   60421 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:58.983955   60421 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:59.036124   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:59.039113   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:04:59.039139   60421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:04:59.087493   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:04:59.087531   60421 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:04:59.094976   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:59.129816   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:59.129840   60421 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:04:59.236390   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:00.157688   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.121522553s)
	I1216 21:05:00.157736   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.157751   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.157764   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.06274536s)
	I1216 21:05:00.157830   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.157851   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158259   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158270   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158282   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158288   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158297   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158314   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.158327   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158319   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158344   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.158352   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158589   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158604   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158624   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158589   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158655   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.182819   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.182844   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.183229   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.183271   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.679810   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.44337328s)
	I1216 21:05:00.679867   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.679880   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.680233   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.680254   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.680266   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.680274   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.680612   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.680632   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.680643   60421 addons.go:475] Verifying addon metrics-server=true in "no-preload-232338"
	I1216 21:05:00.680659   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.682400   60421 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1216 21:05:00.684226   60421 addons.go:510] duration metric: took 2.048395371s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1216 21:05:00.997599   60421 pod_ready.go:103] pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:01.990706   60421 pod_ready.go:93] pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:01.990733   60421 pod_ready.go:82] duration metric: took 3.006750411s for pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:01.990742   60421 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:03.998055   60421 pod_ready.go:103] pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:05.997310   60421 pod_ready.go:93] pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:05.997334   60421 pod_ready.go:82] duration metric: took 4.006586503s for pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:05.997346   60421 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.002576   60421 pod_ready.go:93] pod "etcd-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.002597   60421 pod_ready.go:82] duration metric: took 5.244238ms for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.002607   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.007407   60421 pod_ready.go:93] pod "kube-apiserver-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.007435   60421 pod_ready.go:82] duration metric: took 4.820838ms for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.007449   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.012239   60421 pod_ready.go:93] pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.012263   60421 pod_ready.go:82] duration metric: took 4.806874ms for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.012273   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m5hq8" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.017087   60421 pod_ready.go:93] pod "kube-proxy-m5hq8" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.017111   60421 pod_ready.go:82] duration metric: took 4.830348ms for pod "kube-proxy-m5hq8" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.017124   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.393947   60421 pod_ready.go:93] pod "kube-scheduler-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.393978   60421 pod_ready.go:82] duration metric: took 376.845934ms for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.393989   60421 pod_ready.go:39] duration metric: took 7.444356073s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:05:06.394008   60421 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:05:06.394074   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:05:06.410287   60421 api_server.go:72] duration metric: took 7.774519412s to wait for apiserver process to appear ...
	I1216 21:05:06.410327   60421 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:05:06.410363   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:05:06.415344   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 200:
	ok
	I1216 21:05:06.416302   60421 api_server.go:141] control plane version: v1.32.0
	I1216 21:05:06.416324   60421 api_server.go:131] duration metric: took 5.989768ms to wait for apiserver health ...
	I1216 21:05:06.416333   60421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:05:06.598174   60421 system_pods.go:59] 9 kube-system pods found
	I1216 21:05:06.598205   60421 system_pods.go:61] "coredns-668d6bf9bc-4wwvd" [1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e] Running
	I1216 21:05:06.598210   60421 system_pods.go:61] "coredns-668d6bf9bc-c4qfj" [b9bf3125-1e6d-4794-a2e6-2ff7ed5132b1] Running
	I1216 21:05:06.598214   60421 system_pods.go:61] "etcd-no-preload-232338" [5318f756-4c64-46be-b71b-94d53f48f0e9] Running
	I1216 21:05:06.598218   60421 system_pods.go:61] "kube-apiserver-no-preload-232338" [8d8fa68c-80ab-4747-a2ce-eeaff8847c29] Running
	I1216 21:05:06.598222   60421 system_pods.go:61] "kube-controller-manager-no-preload-232338" [8626806c-cd3f-488c-95c3-4b909878c1e4] Running
	I1216 21:05:06.598224   60421 system_pods.go:61] "kube-proxy-m5hq8" [ca0d357a-dda2-4508-a954-5c67eaf5b8ac] Running
	I1216 21:05:06.598229   60421 system_pods.go:61] "kube-scheduler-no-preload-232338" [8944107e-9e5c-474b-a0c1-9461e797a131] Running
	I1216 21:05:06.598236   60421 system_pods.go:61] "metrics-server-f79f97bbb-l7dcr" [fabafb40-1cb8-427b-88a6-37eeb6fd5b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:06.598240   60421 system_pods.go:61] "storage-provisioner" [3b742666-dfd4-4c9b-95a9-25367ec2a718] Running
	I1216 21:05:06.598248   60421 system_pods.go:74] duration metric: took 181.908567ms to wait for pod list to return data ...
	I1216 21:05:06.598255   60421 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:05:06.794774   60421 default_sa.go:45] found service account: "default"
	I1216 21:05:06.794805   60421 default_sa.go:55] duration metric: took 196.542698ms for default service account to be created ...
	I1216 21:05:06.794823   60421 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:05:06.998297   60421 system_pods.go:86] 9 kube-system pods found
	I1216 21:05:06.998332   60421 system_pods.go:89] "coredns-668d6bf9bc-4wwvd" [1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e] Running
	I1216 21:05:06.998341   60421 system_pods.go:89] "coredns-668d6bf9bc-c4qfj" [b9bf3125-1e6d-4794-a2e6-2ff7ed5132b1] Running
	I1216 21:05:06.998348   60421 system_pods.go:89] "etcd-no-preload-232338" [5318f756-4c64-46be-b71b-94d53f48f0e9] Running
	I1216 21:05:06.998354   60421 system_pods.go:89] "kube-apiserver-no-preload-232338" [8d8fa68c-80ab-4747-a2ce-eeaff8847c29] Running
	I1216 21:05:06.998359   60421 system_pods.go:89] "kube-controller-manager-no-preload-232338" [8626806c-cd3f-488c-95c3-4b909878c1e4] Running
	I1216 21:05:06.998364   60421 system_pods.go:89] "kube-proxy-m5hq8" [ca0d357a-dda2-4508-a954-5c67eaf5b8ac] Running
	I1216 21:05:06.998369   60421 system_pods.go:89] "kube-scheduler-no-preload-232338" [8944107e-9e5c-474b-a0c1-9461e797a131] Running
	I1216 21:05:06.998378   60421 system_pods.go:89] "metrics-server-f79f97bbb-l7dcr" [fabafb40-1cb8-427b-88a6-37eeb6fd5b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:06.998385   60421 system_pods.go:89] "storage-provisioner" [3b742666-dfd4-4c9b-95a9-25367ec2a718] Running
	I1216 21:05:06.998397   60421 system_pods.go:126] duration metric: took 203.564807ms to wait for k8s-apps to be running ...
	I1216 21:05:06.998407   60421 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:05:06.998457   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:07.014979   60421 system_svc.go:56] duration metric: took 16.561363ms WaitForService to wait for kubelet
	I1216 21:05:07.015013   60421 kubeadm.go:582] duration metric: took 8.379260538s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:05:07.015029   60421 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:05:07.195470   60421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:05:07.195504   60421 node_conditions.go:123] node cpu capacity is 2
	I1216 21:05:07.195516   60421 node_conditions.go:105] duration metric: took 180.480949ms to run NodePressure ...
	I1216 21:05:07.195530   60421 start.go:241] waiting for startup goroutines ...
	I1216 21:05:07.195541   60421 start.go:246] waiting for cluster config update ...
	I1216 21:05:07.195554   60421 start.go:255] writing updated cluster config ...
	I1216 21:05:07.195857   60421 ssh_runner.go:195] Run: rm -f paused
	I1216 21:05:07.244442   60421 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:05:07.246788   60421 out.go:177] * Done! kubectl is now configured to use "no-preload-232338" cluster and "default" namespace by default
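The "no-preload-232338" start-up above finishes only after api_server.go polls the apiserver's /healthz endpoint (the "Checking apiserver healthz at https://192.168.50.240:8443/healthz ... returned 200: ok" lines). The following is a minimal, self-contained Go sketch of that kind of poll, not minikube's actual implementation; the address is copied from the log and the InsecureSkipVerify shortcut is an assumption made purely to keep the example short.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Skipping certificate verification keeps the sketch self-contained; a real
	// check would trust the cluster CA (e.g. from /var/lib/minikube/certs) instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	// Endpoint taken from the log above; substitute your own apiserver address.
	url := "https://192.168.50.240:8443/healthz"
	for attempt := 0; attempt < 30; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver never became healthy")
}

A 200 response with body "ok" corresponds to the "returned 200: ok" lines in the log; anything else (or a connection error) just means the control plane is not up yet and the loop retries.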
	I1216 21:05:06.784032   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:05:06.784224   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:05:13.066274   60215 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.635155592s)
	I1216 21:05:13.066379   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:13.096145   60215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:05:13.109211   60215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:05:13.125828   60215 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:05:13.125859   60215 kubeadm.go:157] found existing configuration files:
	
	I1216 21:05:13.125914   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:05:13.146982   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:05:13.147053   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:05:13.159382   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:05:13.176492   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:05:13.176572   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:05:13.190933   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:05:13.213230   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:05:13.213301   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:05:13.224631   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:05:13.234914   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:05:13.234975   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:05:13.245513   60215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:05:13.300399   60215 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:05:13.300491   60215 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:05:13.424114   60215 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:05:13.424252   60215 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:05:13.424372   60215 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:05:13.434507   60215 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:05:13.436710   60215 out.go:235]   - Generating certificates and keys ...
	I1216 21:05:13.436825   60215 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:05:13.436985   60215 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:05:13.437127   60215 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:05:13.437215   60215 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:05:13.437317   60215 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:05:13.437404   60215 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:05:13.437822   60215 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:05:13.438183   60215 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:05:13.438724   60215 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:05:13.439096   60215 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:05:13.439334   60215 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:05:13.439399   60215 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:05:13.528853   60215 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:05:13.700795   60215 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:05:13.890142   60215 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:05:14.166151   60215 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:05:14.310513   60215 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:05:14.311121   60215 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:05:14.317114   60215 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:05:14.319080   60215 out.go:235]   - Booting up control plane ...
	I1216 21:05:14.319218   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:05:14.319332   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:05:14.319518   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:05:14.340394   60215 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:05:14.348443   60215 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:05:14.348533   60215 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:05:14.493244   60215 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:05:14.493456   60215 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:05:14.995210   60215 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.042805ms
	I1216 21:05:14.995325   60215 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:05:20.000911   60215 kubeadm.go:310] [api-check] The API server is healthy after 5.002773967s
	I1216 21:05:20.019851   60215 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:05:20.037375   60215 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:05:20.074003   60215 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:05:20.074237   60215 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-606219 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:05:20.087136   60215 kubeadm.go:310] [bootstrap-token] Using token: wev02f.lvhctqt9pq1agi1c
	I1216 21:05:20.088742   60215 out.go:235]   - Configuring RBAC rules ...
	I1216 21:05:20.088893   60215 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:05:20.094114   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:05:20.101979   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:05:20.105419   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:05:20.112443   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:05:20.116045   60215 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:05:20.406790   60215 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:05:20.844101   60215 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:05:21.414298   60215 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:05:21.414397   60215 kubeadm.go:310] 
	I1216 21:05:21.414488   60215 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:05:21.414504   60215 kubeadm.go:310] 
	I1216 21:05:21.414636   60215 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:05:21.414655   60215 kubeadm.go:310] 
	I1216 21:05:21.414694   60215 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:05:21.414796   60215 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:05:21.414866   60215 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:05:21.414877   60215 kubeadm.go:310] 
	I1216 21:05:21.414978   60215 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:05:21.415004   60215 kubeadm.go:310] 
	I1216 21:05:21.415071   60215 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:05:21.415080   60215 kubeadm.go:310] 
	I1216 21:05:21.415147   60215 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:05:21.415314   60215 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:05:21.415424   60215 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:05:21.415444   60215 kubeadm.go:310] 
	I1216 21:05:21.415568   60215 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:05:21.415674   60215 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:05:21.415690   60215 kubeadm.go:310] 
	I1216 21:05:21.415837   60215 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wev02f.lvhctqt9pq1agi1c \
	I1216 21:05:21.415982   60215 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:05:21.416023   60215 kubeadm.go:310] 	--control-plane 
	I1216 21:05:21.416033   60215 kubeadm.go:310] 
	I1216 21:05:21.416152   60215 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:05:21.416165   60215 kubeadm.go:310] 
	I1216 21:05:21.416295   60215 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wev02f.lvhctqt9pq1agi1c \
	I1216 21:05:21.416452   60215 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:05:21.417157   60215 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:05:21.417251   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:05:21.417265   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:05:21.418899   60215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:05:21.420240   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:05:21.438639   60215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:05:21.470443   60215 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:05:21.470525   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:21.470552   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-606219 minikube.k8s.io/updated_at=2024_12_16T21_05_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=embed-certs-606219 minikube.k8s.io/primary=true
	I1216 21:05:21.721162   60215 ops.go:34] apiserver oom_adj: -16
	I1216 21:05:21.721292   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:22.221634   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:22.722431   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:23.221436   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:23.721948   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.222009   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.722203   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.835684   60215 kubeadm.go:1113] duration metric: took 3.36522517s to wait for elevateKubeSystemPrivileges
	I1216 21:05:24.835729   60215 kubeadm.go:394] duration metric: took 5m0.316036708s to StartCluster
	I1216 21:05:24.835751   60215 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:05:24.835847   60215 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:05:24.838279   60215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:05:24.838580   60215 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:05:24.838625   60215 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:05:24.838747   60215 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-606219"
	I1216 21:05:24.838768   60215 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-606219"
	W1216 21:05:24.838789   60215 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:05:24.838816   60215 config.go:182] Loaded profile config "embed-certs-606219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:05:24.838825   60215 addons.go:69] Setting default-storageclass=true in profile "embed-certs-606219"
	I1216 21:05:24.838832   60215 addons.go:69] Setting metrics-server=true in profile "embed-certs-606219"
	I1216 21:05:24.838846   60215 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-606219"
	I1216 21:05:24.838822   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.838848   60215 addons.go:234] Setting addon metrics-server=true in "embed-certs-606219"
	W1216 21:05:24.838945   60215 addons.go:243] addon metrics-server should already be in state true
	I1216 21:05:24.838965   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.839285   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839292   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839331   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.839364   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839415   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.839496   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.843833   60215 out.go:177] * Verifying Kubernetes components...
	I1216 21:05:24.845341   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:05:24.857648   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I1216 21:05:24.858457   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.859021   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.859037   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.861356   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36663
	I1216 21:05:24.861406   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I1216 21:05:24.861357   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.861844   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.862150   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.862188   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.862315   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.862334   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.862334   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.862661   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.862876   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.862894   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.863171   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.863200   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.863634   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.863964   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.867371   60215 addons.go:234] Setting addon default-storageclass=true in "embed-certs-606219"
	W1216 21:05:24.867392   60215 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:05:24.867419   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.867758   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.867801   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.884243   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35999
	I1216 21:05:24.884680   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.885282   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.885304   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.885380   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36799
	I1216 21:05:24.885657   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.885730   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.885934   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.886191   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.886202   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.886473   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.886831   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.886853   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.887935   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.890092   60215 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:05:24.891395   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:05:24.891413   60215 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:05:24.891441   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.894367   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I1216 21:05:24.894926   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.895551   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.895570   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.895832   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.896148   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.896382   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.896501   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.896523   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.897136   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.897327   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.897507   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.897673   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:24.898101   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.900061   60215 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:05:24.901390   60215 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:05:24.901412   60215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:05:24.901432   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.904063   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.904403   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.904421   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.904617   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.904828   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.904969   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.905117   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:24.907518   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I1216 21:05:24.907890   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.908349   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.908362   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.908615   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.908793   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.910349   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.910557   60215 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:05:24.910590   60215 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:05:24.910623   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.913163   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.913546   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.913628   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.913971   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.914156   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.914402   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.914562   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:25.054773   60215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:05:25.077692   60215 node_ready.go:35] waiting up to 6m0s for node "embed-certs-606219" to be "Ready" ...
	I1216 21:05:25.085592   60215 node_ready.go:49] node "embed-certs-606219" has status "Ready":"True"
	I1216 21:05:25.085618   60215 node_ready.go:38] duration metric: took 7.893359ms for node "embed-certs-606219" to be "Ready" ...
	I1216 21:05:25.085630   60215 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:05:25.092073   60215 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:25.160890   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:05:25.171950   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:05:25.174517   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:05:25.174540   60215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:05:25.201386   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:05:25.201415   60215 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:05:25.279568   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:25.279599   60215 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:05:25.316528   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:25.944495   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944521   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944529   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944533   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944816   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.944835   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.944845   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944855   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944855   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.944869   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.944876   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944888   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944817   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945069   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945131   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.945147   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.945168   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.945173   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945218   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.961427   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.961449   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.961729   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.961743   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.745600   60215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.429029698s)
	I1216 21:05:26.745665   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:26.745678   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:26.746097   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:26.746115   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:26.746128   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.746142   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:26.746151   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:26.746429   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:26.746446   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.746457   60215 addons.go:475] Verifying addon metrics-server=true in "embed-certs-606219"
	I1216 21:05:26.746480   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:26.748859   60215 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1216 21:05:26.785021   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:05:26.785309   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
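The repeated [kubelet-check] failures in the 60933 run (old-k8s-version) are kubeadm performing the HTTP call it names, 'curl -sSL http://localhost:10248/healthz', against the kubelet's health port. Below is a minimal Go sketch of that same probe, offered only as an illustration of what "connection refused" means here: nothing is listening on 10248 because the kubelet never came up.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		// e.g. dial tcp 127.0.0.1:10248: connect: connection refused
		fmt.Println("kubelet not healthy:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz status:", resp.Status)
}

When the kubelet is running and healthy the endpoint returns 200, which is why the successful embed-certs run above logs "[kubelet-check] The kubelet is healthy after 502.042805ms" instead of these errors.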
	I1216 21:05:26.750502   60215 addons.go:510] duration metric: took 1.911885721s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1216 21:05:27.124629   60215 pod_ready.go:103] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:28.100607   60215 pod_ready.go:93] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:28.100642   60215 pod_ready.go:82] duration metric: took 3.008540123s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.100654   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.107620   60215 pod_ready.go:93] pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:28.107649   60215 pod_ready.go:82] duration metric: took 6.986126ms for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.107661   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:30.114012   60215 pod_ready.go:103] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:31.116704   60215 pod_ready.go:93] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:31.116738   60215 pod_ready.go:82] duration metric: took 3.009069732s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.116752   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.122043   60215 pod_ready.go:93] pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:31.122079   60215 pod_ready.go:82] duration metric: took 5.318248ms for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.122089   60215 pod_ready.go:39] duration metric: took 6.036446164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:05:31.122107   60215 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:05:31.122167   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:05:31.140854   60215 api_server.go:72] duration metric: took 6.302233923s to wait for apiserver process to appear ...
	I1216 21:05:31.140887   60215 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:05:31.140910   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:05:31.146080   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 200:
	ok
	I1216 21:05:31.147076   60215 api_server.go:141] control plane version: v1.32.0
	I1216 21:05:31.147107   60215 api_server.go:131] duration metric: took 6.2056ms to wait for apiserver health ...
	I1216 21:05:31.147115   60215 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:05:31.152598   60215 system_pods.go:59] 9 kube-system pods found
	I1216 21:05:31.152627   60215 system_pods.go:61] "coredns-668d6bf9bc-5c74p" [ef8e73b6-150f-47cc-9df9-dcf983e5bd6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.152634   60215 system_pods.go:61] "coredns-668d6bf9bc-xhdlz" [c1b5b585-f005-4885-9809-60f60e03bf04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.152640   60215 system_pods.go:61] "etcd-embed-certs-606219" [f5595ee4-23f3-4227-8e25-8679fd2dc722] Running
	I1216 21:05:31.152643   60215 system_pods.go:61] "kube-apiserver-embed-certs-606219" [be11ba17-ecee-47c1-a4bd-329e0e705369] Running
	I1216 21:05:31.152647   60215 system_pods.go:61] "kube-controller-manager-embed-certs-606219" [21210597-d4d5-4cab-9a24-2d9f702f682d] Running
	I1216 21:05:31.152652   60215 system_pods.go:61] "kube-proxy-677x9" [37810520-4f02-46c4-8eeb-6dc70c859e3e] Running
	I1216 21:05:31.152655   60215 system_pods.go:61] "kube-scheduler-embed-certs-606219" [5a39f42d-b727-4acd-bd39-ae1c56a5b725] Running
	I1216 21:05:31.152659   60215 system_pods.go:61] "metrics-server-f79f97bbb-6fxnl" [828f2925-402c-4f49-89e1-354e082c0de4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:31.152662   60215 system_pods.go:61] "storage-provisioner" [6437bd61-690b-498d-b35c-e2ef4eb5be97] Running
	I1216 21:05:31.152669   60215 system_pods.go:74] duration metric: took 5.548798ms to wait for pod list to return data ...
	I1216 21:05:31.152675   60215 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:05:31.155444   60215 default_sa.go:45] found service account: "default"
	I1216 21:05:31.155469   60215 default_sa.go:55] duration metric: took 2.788897ms for default service account to be created ...
	I1216 21:05:31.155477   60215 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:05:31.160520   60215 system_pods.go:86] 9 kube-system pods found
	I1216 21:05:31.160548   60215 system_pods.go:89] "coredns-668d6bf9bc-5c74p" [ef8e73b6-150f-47cc-9df9-dcf983e5bd6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.160555   60215 system_pods.go:89] "coredns-668d6bf9bc-xhdlz" [c1b5b585-f005-4885-9809-60f60e03bf04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.160561   60215 system_pods.go:89] "etcd-embed-certs-606219" [f5595ee4-23f3-4227-8e25-8679fd2dc722] Running
	I1216 21:05:31.160565   60215 system_pods.go:89] "kube-apiserver-embed-certs-606219" [be11ba17-ecee-47c1-a4bd-329e0e705369] Running
	I1216 21:05:31.160569   60215 system_pods.go:89] "kube-controller-manager-embed-certs-606219" [21210597-d4d5-4cab-9a24-2d9f702f682d] Running
	I1216 21:05:31.160573   60215 system_pods.go:89] "kube-proxy-677x9" [37810520-4f02-46c4-8eeb-6dc70c859e3e] Running
	I1216 21:05:31.160576   60215 system_pods.go:89] "kube-scheduler-embed-certs-606219" [5a39f42d-b727-4acd-bd39-ae1c56a5b725] Running
	I1216 21:05:31.160580   60215 system_pods.go:89] "metrics-server-f79f97bbb-6fxnl" [828f2925-402c-4f49-89e1-354e082c0de4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:31.160584   60215 system_pods.go:89] "storage-provisioner" [6437bd61-690b-498d-b35c-e2ef4eb5be97] Running
	I1216 21:05:31.160591   60215 system_pods.go:126] duration metric: took 5.109359ms to wait for k8s-apps to be running ...
	I1216 21:05:31.160597   60215 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:05:31.160637   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:31.177182   60215 system_svc.go:56] duration metric: took 16.575484ms WaitForService to wait for kubelet
	I1216 21:05:31.177216   60215 kubeadm.go:582] duration metric: took 6.33860089s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:05:31.177239   60215 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:05:31.180614   60215 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:05:31.180635   60215 node_conditions.go:123] node cpu capacity is 2
	I1216 21:05:31.180645   60215 node_conditions.go:105] duration metric: took 3.400617ms to run NodePressure ...
	I1216 21:05:31.180656   60215 start.go:241] waiting for startup goroutines ...
	I1216 21:05:31.180667   60215 start.go:246] waiting for cluster config update ...
	I1216 21:05:31.180684   60215 start.go:255] writing updated cluster config ...
	I1216 21:05:31.180960   60215 ssh_runner.go:195] Run: rm -f paused
	I1216 21:05:31.232404   60215 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:05:31.234366   60215 out.go:177] * Done! kubectl is now configured to use "embed-certs-606219" cluster and "default" namespace by default
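Before declaring "embed-certs-606219" done, the run above spends most of its time in pod_ready.go and system_pods.go, listing kube-system pods and waiting for each one's Ready condition. The sketch below shows one way to do an equivalent wait with client-go; it is an illustrative assumption, not minikube's code, and the kubeconfig path, label selector, and 6-minute budget are stand-ins chosen to mirror the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the Pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// clientcmd.RecommendedHomeFile is ~/.kube/config; adjust for other setups.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget the log uses
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					ready = false
					break
				}
			}
			if ready {
				fmt.Println("all matching pods are Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pods to become Ready")
}

Run with a kubeconfig pointing at the cluster, this loops until every pod matching the selector reports Ready or the deadline passes, which is the same pattern behind the "pod ... has status \"Ready\":\"True\"" lines in the log.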
	I1216 21:06:06.787417   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:06.787673   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:06:06.787700   60933 kubeadm.go:310] 
	I1216 21:06:06.787779   60933 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 21:06:06.787849   60933 kubeadm.go:310] 		timed out waiting for the condition
	I1216 21:06:06.787864   60933 kubeadm.go:310] 
	I1216 21:06:06.787894   60933 kubeadm.go:310] 	This error is likely caused by:
	I1216 21:06:06.787944   60933 kubeadm.go:310] 		- The kubelet is not running
	I1216 21:06:06.788115   60933 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 21:06:06.788131   60933 kubeadm.go:310] 
	I1216 21:06:06.788238   60933 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 21:06:06.788270   60933 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 21:06:06.788328   60933 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 21:06:06.788346   60933 kubeadm.go:310] 
	I1216 21:06:06.788492   60933 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 21:06:06.788568   60933 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 21:06:06.788575   60933 kubeadm.go:310] 
	I1216 21:06:06.788706   60933 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 21:06:06.788914   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 21:06:06.789052   60933 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 21:06:06.789150   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 21:06:06.789160   60933 kubeadm.go:310] 
	I1216 21:06:06.789970   60933 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:06:06.790084   60933 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 21:06:06.790222   60933 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1216 21:06:06.790376   60933 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 21:06:06.790430   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:06:07.272336   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:06:07.288881   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:06:07.303411   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:06:07.303437   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:06:07.303486   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:06:07.314605   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:06:07.314675   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:06:07.326523   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:06:07.336506   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:06:07.336587   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:06:07.347505   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:06:07.357743   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:06:07.357799   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:06:07.368251   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:06:07.378296   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:06:07.378366   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:06:07.390625   60933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:06:07.461800   60933 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 21:06:07.461911   60933 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:06:07.607467   60933 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:06:07.607664   60933 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:06:07.607821   60933 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 21:06:07.821429   60933 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:06:07.823617   60933 out.go:235]   - Generating certificates and keys ...
	I1216 21:06:07.823728   60933 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:06:07.823826   60933 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:06:07.823970   60933 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:06:07.824066   60933 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:06:07.824191   60933 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:06:07.824281   60933 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:06:07.824374   60933 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:06:07.824452   60933 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:06:07.824529   60933 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:06:07.824634   60933 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:06:07.824728   60933 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:06:07.824826   60933 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:06:08.070481   60933 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:06:08.416182   60933 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:06:08.472848   60933 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:06:08.528700   60933 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:06:08.551528   60933 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:06:08.552215   60933 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:06:08.552299   60933 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:06:08.702187   60933 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:06:08.704170   60933 out.go:235]   - Booting up control plane ...
	I1216 21:06:08.704286   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:06:08.721205   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:06:08.722619   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:06:08.724289   60933 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:06:08.726457   60933 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 21:06:48.729045   60933 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 21:06:48.729713   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:48.730028   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:06:53.730648   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:53.730870   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:07:03.731670   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:07:03.731904   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:07:23.733276   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:07:23.733489   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:08:03.734439   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:08:03.734730   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:08:03.734768   60933 kubeadm.go:310] 
	I1216 21:08:03.734831   60933 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 21:08:03.734902   60933 kubeadm.go:310] 		timed out waiting for the condition
	I1216 21:08:03.734917   60933 kubeadm.go:310] 
	I1216 21:08:03.734966   60933 kubeadm.go:310] 	This error is likely caused by:
	I1216 21:08:03.735003   60933 kubeadm.go:310] 		- The kubelet is not running
	I1216 21:08:03.735094   60933 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 21:08:03.735104   60933 kubeadm.go:310] 
	I1216 21:08:03.735260   60933 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 21:08:03.735325   60933 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 21:08:03.735353   60933 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 21:08:03.735359   60933 kubeadm.go:310] 
	I1216 21:08:03.735486   60933 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 21:08:03.735604   60933 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 21:08:03.735614   60933 kubeadm.go:310] 
	I1216 21:08:03.735757   60933 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 21:08:03.735880   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 21:08:03.735986   60933 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 21:08:03.736096   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 21:08:03.736107   60933 kubeadm.go:310] 
	I1216 21:08:03.736944   60933 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:08:03.737145   60933 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 21:08:03.737211   60933 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 21:08:03.737287   60933 kubeadm.go:394] duration metric: took 7m57.891196073s to StartCluster
	I1216 21:08:03.737346   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:08:03.737417   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:08:03.789377   60933 cri.go:89] found id: ""
	I1216 21:08:03.789412   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.789421   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:08:03.789426   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:08:03.789477   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:08:03.831122   60933 cri.go:89] found id: ""
	I1216 21:08:03.831150   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.831161   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:08:03.831167   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:08:03.831236   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:08:03.870598   60933 cri.go:89] found id: ""
	I1216 21:08:03.870625   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.870634   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:08:03.870640   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:08:03.870695   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:08:03.909060   60933 cri.go:89] found id: ""
	I1216 21:08:03.909095   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.909103   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:08:03.909109   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:08:03.909163   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:08:03.946925   60933 cri.go:89] found id: ""
	I1216 21:08:03.946954   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.946962   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:08:03.946968   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:08:03.947038   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:08:03.985596   60933 cri.go:89] found id: ""
	I1216 21:08:03.985629   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.985650   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:08:03.985670   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:08:03.985736   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:08:04.022504   60933 cri.go:89] found id: ""
	I1216 21:08:04.022530   60933 logs.go:282] 0 containers: []
	W1216 21:08:04.022538   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:08:04.022545   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:08:04.022608   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:08:04.075636   60933 cri.go:89] found id: ""
	I1216 21:08:04.075667   60933 logs.go:282] 0 containers: []
	W1216 21:08:04.075677   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:08:04.075688   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:08:04.075707   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:08:04.180622   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:08:04.180653   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:08:04.180671   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:08:04.308091   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:08:04.308146   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:08:04.353240   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:08:04.353294   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:08:04.407919   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:08:04.407955   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 21:08:04.423583   60933 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1216 21:08:04.423644   60933 out.go:270] * 
	W1216 21:08:04.423727   60933 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 21:08:04.423749   60933 out.go:270] * 
	W1216 21:08:04.424576   60933 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 21:08:04.428361   60933 out.go:201] 
	W1216 21:08:04.429839   60933 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 21:08:04.429919   60933 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 21:08:04.429958   60933 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 21:08:04.431619   60933 out.go:201] 
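	(Editor's note, not part of the captured log: a hedged sketch collecting the kubelet troubleshooting commands and the remediation that the output above suggests. "<profile>" is a placeholder for the failing profile name, which is not shown in this excerpt; all commands and flags are taken from the log text above or are standard minikube/systemd/crictl usage.)
	  # Inspect the node (e.g. via `minikube ssh -p <profile>`) to see why the kubelet never answered on 127.0.0.1:10248
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet | tail -n 100
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  # Retry the start with the cgroup-driver override suggested in the log (see minikube issue #4172)
	  minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd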
	
	
	==> CRI-O <==
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.276609456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383673276577411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4923593b-3b66-4522-b6ba-3fb40749a8a1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.277216282Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=570e56e0-bc01-44cb-ba0f-726cb2d1faed name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.277324384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=570e56e0-bc01-44cb-ba0f-726cb2d1faed name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.277618223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f207b770a60c78f07c7d2caae42124dc7cb80a0f2c2c4d421803607465ed058c,PodSandboxId:f1532bf4e0fd1a6d5a9b45282f434801954d844e6d03b0c7eb98493e6c3ab1c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383126995102678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6437bd61-690b-498d-b35c-e2ef4eb5be97,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e011e81807ce18c97ed73f8ae1e9c158bbad51f79df6c1bb7808de64827f86c,PodSandboxId:2b784c516581038f6ad9f2a5df073e00d80f8e09f756249656389e0f641db76f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383126870609242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xhdlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1b5b585-f005-4885-9809-60f60e03bf04,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da4ed6ea7998ea937d7237a73d1d06d850a02996479db7043cc3186011d15164,PodSandboxId:6b2f68a619579c14faf033125a44d8023e136897c6a996a00b0eaf8ca14dc783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383126652789811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5c74p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
f8e73b6-150f-47cc-9df9-dcf983e5bd6e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af285d29097840b5484fc635cea5ab9e9ffa5c72a4d6ad4cc8eec49901107aa8,PodSandboxId:3ec9e248d92f10aa127b20d4a9ea3a6aef804439831be5a7bb2e6672e84b6676,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt
:1734383126113562135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-677x9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37810520-4f02-46c4-8eeb-6dc70c859e3e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d76c01ea6554ad7ca7460a3e0b52e675fccc595170707f82e376ab5b53a254d3,PodSandboxId:e3547ba0c045ed6fec7866b719a022bebc0360eaf1307e283f12d9b32813f4a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383115507571750
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030d95567448159717b54757a6d98e97,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcfec28b8854e887e26ff5e13a923fb0c3fed8905ec63a220a57a76d0df19da2,PodSandboxId:6e976a36e979dfc8f9c5de9c51f86cc89bd0a1f15aeb530b3b7b24e387da4f8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383115561
461843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b88848a2e234a69d007899565b5bbcce,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a5c85fda02edeece59bcc01bb3489ae65a10b552e4e3b193037ba8d7a2cd2e,PodSandboxId:de7b88b663bb53fde5baa4a22e5958658645e9fc39a50c8952d0c2dfc640612e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734383115511342492,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528ee518c9057e66ed32f2256a823012,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bce3fd7741fe3a1170af645de4eb2329619554377cffee90b06f6dbf85a52f,PodSandboxId:bf8a6fd807d690167ff5bf9d84883c3af170e1ea43fa641f6e17de763289daa2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383115405402764,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a485fe61bbc43636caa6b063150a4f07,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea9639cafb04ef075e0b7522ec597b07b4878836f5fdb90e98b048758325993,PodSandboxId:24f5a79e440b43a9f6694acc89976a3171a8068fb9dcd9ea12c799754ee504b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382826973437312,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b88848a2e234a69d007899565b5bbcce,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=570e56e0-bc01-44cb-ba0f-726cb2d1faed name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.322229219Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a62d670b-c9dc-4296-8a02-f7762ba547b5 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.322361369Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a62d670b-c9dc-4296-8a02-f7762ba547b5 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.323594350Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c9a8af66-9827-4a80-9f19-a0f1db15eba9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.324452061Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383673324417503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9a8af66-9827-4a80-9f19-a0f1db15eba9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.325402117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4809cb76-3d41-4b9b-ae6a-fdc0f1306412 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.325483195Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4809cb76-3d41-4b9b-ae6a-fdc0f1306412 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.325683991Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f207b770a60c78f07c7d2caae42124dc7cb80a0f2c2c4d421803607465ed058c,PodSandboxId:f1532bf4e0fd1a6d5a9b45282f434801954d844e6d03b0c7eb98493e6c3ab1c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383126995102678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6437bd61-690b-498d-b35c-e2ef4eb5be97,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e011e81807ce18c97ed73f8ae1e9c158bbad51f79df6c1bb7808de64827f86c,PodSandboxId:2b784c516581038f6ad9f2a5df073e00d80f8e09f756249656389e0f641db76f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383126870609242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xhdlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1b5b585-f005-4885-9809-60f60e03bf04,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da4ed6ea7998ea937d7237a73d1d06d850a02996479db7043cc3186011d15164,PodSandboxId:6b2f68a619579c14faf033125a44d8023e136897c6a996a00b0eaf8ca14dc783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383126652789811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5c74p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
f8e73b6-150f-47cc-9df9-dcf983e5bd6e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af285d29097840b5484fc635cea5ab9e9ffa5c72a4d6ad4cc8eec49901107aa8,PodSandboxId:3ec9e248d92f10aa127b20d4a9ea3a6aef804439831be5a7bb2e6672e84b6676,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt
:1734383126113562135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-677x9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37810520-4f02-46c4-8eeb-6dc70c859e3e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d76c01ea6554ad7ca7460a3e0b52e675fccc595170707f82e376ab5b53a254d3,PodSandboxId:e3547ba0c045ed6fec7866b719a022bebc0360eaf1307e283f12d9b32813f4a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383115507571750
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030d95567448159717b54757a6d98e97,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcfec28b8854e887e26ff5e13a923fb0c3fed8905ec63a220a57a76d0df19da2,PodSandboxId:6e976a36e979dfc8f9c5de9c51f86cc89bd0a1f15aeb530b3b7b24e387da4f8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383115561
461843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b88848a2e234a69d007899565b5bbcce,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a5c85fda02edeece59bcc01bb3489ae65a10b552e4e3b193037ba8d7a2cd2e,PodSandboxId:de7b88b663bb53fde5baa4a22e5958658645e9fc39a50c8952d0c2dfc640612e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734383115511342492,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528ee518c9057e66ed32f2256a823012,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bce3fd7741fe3a1170af645de4eb2329619554377cffee90b06f6dbf85a52f,PodSandboxId:bf8a6fd807d690167ff5bf9d84883c3af170e1ea43fa641f6e17de763289daa2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383115405402764,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a485fe61bbc43636caa6b063150a4f07,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea9639cafb04ef075e0b7522ec597b07b4878836f5fdb90e98b048758325993,PodSandboxId:24f5a79e440b43a9f6694acc89976a3171a8068fb9dcd9ea12c799754ee504b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382826973437312,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b88848a2e234a69d007899565b5bbcce,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4809cb76-3d41-4b9b-ae6a-fdc0f1306412 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.370192436Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb7c2e84-1984-44ef-9878-696f2b0618f6 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.370309216Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb7c2e84-1984-44ef-9878-696f2b0618f6 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.371779749Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64b49a6f-e19f-47d9-b9fa-8aca0c6f8fd8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.372413650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383673372387158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64b49a6f-e19f-47d9-b9fa-8aca0c6f8fd8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.372992468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e10dc5a-dc5c-452d-a0c1-249e22192be8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.373050090Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e10dc5a-dc5c-452d-a0c1-249e22192be8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.373487705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f207b770a60c78f07c7d2caae42124dc7cb80a0f2c2c4d421803607465ed058c,PodSandboxId:f1532bf4e0fd1a6d5a9b45282f434801954d844e6d03b0c7eb98493e6c3ab1c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383126995102678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6437bd61-690b-498d-b35c-e2ef4eb5be97,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e011e81807ce18c97ed73f8ae1e9c158bbad51f79df6c1bb7808de64827f86c,PodSandboxId:2b784c516581038f6ad9f2a5df073e00d80f8e09f756249656389e0f641db76f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383126870609242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xhdlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1b5b585-f005-4885-9809-60f60e03bf04,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da4ed6ea7998ea937d7237a73d1d06d850a02996479db7043cc3186011d15164,PodSandboxId:6b2f68a619579c14faf033125a44d8023e136897c6a996a00b0eaf8ca14dc783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383126652789811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5c74p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
f8e73b6-150f-47cc-9df9-dcf983e5bd6e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af285d29097840b5484fc635cea5ab9e9ffa5c72a4d6ad4cc8eec49901107aa8,PodSandboxId:3ec9e248d92f10aa127b20d4a9ea3a6aef804439831be5a7bb2e6672e84b6676,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt
:1734383126113562135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-677x9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37810520-4f02-46c4-8eeb-6dc70c859e3e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d76c01ea6554ad7ca7460a3e0b52e675fccc595170707f82e376ab5b53a254d3,PodSandboxId:e3547ba0c045ed6fec7866b719a022bebc0360eaf1307e283f12d9b32813f4a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383115507571750
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030d95567448159717b54757a6d98e97,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcfec28b8854e887e26ff5e13a923fb0c3fed8905ec63a220a57a76d0df19da2,PodSandboxId:6e976a36e979dfc8f9c5de9c51f86cc89bd0a1f15aeb530b3b7b24e387da4f8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383115561
461843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b88848a2e234a69d007899565b5bbcce,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a5c85fda02edeece59bcc01bb3489ae65a10b552e4e3b193037ba8d7a2cd2e,PodSandboxId:de7b88b663bb53fde5baa4a22e5958658645e9fc39a50c8952d0c2dfc640612e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734383115511342492,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528ee518c9057e66ed32f2256a823012,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bce3fd7741fe3a1170af645de4eb2329619554377cffee90b06f6dbf85a52f,PodSandboxId:bf8a6fd807d690167ff5bf9d84883c3af170e1ea43fa641f6e17de763289daa2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383115405402764,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a485fe61bbc43636caa6b063150a4f07,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea9639cafb04ef075e0b7522ec597b07b4878836f5fdb90e98b048758325993,PodSandboxId:24f5a79e440b43a9f6694acc89976a3171a8068fb9dcd9ea12c799754ee504b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382826973437312,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b88848a2e234a69d007899565b5bbcce,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e10dc5a-dc5c-452d-a0c1-249e22192be8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.413041660Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=abb33ed4-6c3b-48fe-b2e5-14731d9d96a0 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.413241391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=abb33ed4-6c3b-48fe-b2e5-14731d9d96a0 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.414582514Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c900fd43-8e01-4790-9b44-364a536d774b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.414968506Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383673414944475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c900fd43-8e01-4790-9b44-364a536d774b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.415434637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24c94d3c-30ce-442d-962e-89d526e99c1a name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.415490756Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24c94d3c-30ce-442d-962e-89d526e99c1a name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:14:33 embed-certs-606219 crio[729]: time="2024-12-16 21:14:33.415671922Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f207b770a60c78f07c7d2caae42124dc7cb80a0f2c2c4d421803607465ed058c,PodSandboxId:f1532bf4e0fd1a6d5a9b45282f434801954d844e6d03b0c7eb98493e6c3ab1c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383126995102678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6437bd61-690b-498d-b35c-e2ef4eb5be97,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e011e81807ce18c97ed73f8ae1e9c158bbad51f79df6c1bb7808de64827f86c,PodSandboxId:2b784c516581038f6ad9f2a5df073e00d80f8e09f756249656389e0f641db76f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383126870609242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xhdlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1b5b585-f005-4885-9809-60f60e03bf04,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da4ed6ea7998ea937d7237a73d1d06d850a02996479db7043cc3186011d15164,PodSandboxId:6b2f68a619579c14faf033125a44d8023e136897c6a996a00b0eaf8ca14dc783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383126652789811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5c74p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
f8e73b6-150f-47cc-9df9-dcf983e5bd6e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af285d29097840b5484fc635cea5ab9e9ffa5c72a4d6ad4cc8eec49901107aa8,PodSandboxId:3ec9e248d92f10aa127b20d4a9ea3a6aef804439831be5a7bb2e6672e84b6676,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt
:1734383126113562135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-677x9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37810520-4f02-46c4-8eeb-6dc70c859e3e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d76c01ea6554ad7ca7460a3e0b52e675fccc595170707f82e376ab5b53a254d3,PodSandboxId:e3547ba0c045ed6fec7866b719a022bebc0360eaf1307e283f12d9b32813f4a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383115507571750
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030d95567448159717b54757a6d98e97,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcfec28b8854e887e26ff5e13a923fb0c3fed8905ec63a220a57a76d0df19da2,PodSandboxId:6e976a36e979dfc8f9c5de9c51f86cc89bd0a1f15aeb530b3b7b24e387da4f8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383115561
461843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b88848a2e234a69d007899565b5bbcce,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a5c85fda02edeece59bcc01bb3489ae65a10b552e4e3b193037ba8d7a2cd2e,PodSandboxId:de7b88b663bb53fde5baa4a22e5958658645e9fc39a50c8952d0c2dfc640612e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734383115511342492,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528ee518c9057e66ed32f2256a823012,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bce3fd7741fe3a1170af645de4eb2329619554377cffee90b06f6dbf85a52f,PodSandboxId:bf8a6fd807d690167ff5bf9d84883c3af170e1ea43fa641f6e17de763289daa2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383115405402764,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a485fe61bbc43636caa6b063150a4f07,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea9639cafb04ef075e0b7522ec597b07b4878836f5fdb90e98b048758325993,PodSandboxId:24f5a79e440b43a9f6694acc89976a3171a8068fb9dcd9ea12c799754ee504b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382826973437312,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b88848a2e234a69d007899565b5bbcce,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24c94d3c-30ce-442d-962e-89d526e99c1a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f207b770a60c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   f1532bf4e0fd1       storage-provisioner
	1e011e81807ce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   2b784c5165810       coredns-668d6bf9bc-xhdlz
	da4ed6ea7998e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   6b2f68a619579       coredns-668d6bf9bc-5c74p
	af285d2909784       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   9 minutes ago       Running             kube-proxy                0                   3ec9e248d92f1       kube-proxy-677x9
	bcfec28b8854e       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   9 minutes ago       Running             kube-apiserver            2                   6e976a36e979d       kube-apiserver-embed-certs-606219
	b3a5c85fda02e       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   9 minutes ago       Running             kube-scheduler            2                   de7b88b663bb5       kube-scheduler-embed-certs-606219
	d76c01ea6554a       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   9 minutes ago       Running             kube-controller-manager   2                   e3547ba0c045e       kube-controller-manager-embed-certs-606219
	e7bce3fd7741f       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   9 minutes ago       Running             etcd                      2                   bf8a6fd807d69       etcd-embed-certs-606219
	4ea9639cafb04       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   14 minutes ago      Exited              kube-apiserver            1                   24f5a79e440b4       kube-apiserver-embed-certs-606219
	
	
	==> coredns [1e011e81807ce18c97ed73f8ae1e9c158bbad51f79df6c1bb7808de64827f86c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [da4ed6ea7998ea937d7237a73d1d06d850a02996479db7043cc3186011d15164] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-606219
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-606219
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477
	                    minikube.k8s.io/name=embed-certs-606219
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T21_05_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 21:05:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-606219
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 21:14:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 21:11:58 +0000   Mon, 16 Dec 2024 21:05:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 21:11:58 +0000   Mon, 16 Dec 2024 21:05:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 21:11:58 +0000   Mon, 16 Dec 2024 21:05:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 21:11:58 +0000   Mon, 16 Dec 2024 21:05:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.151
	  Hostname:    embed-certs-606219
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 03dc9006e7ea4386a7cd370dbe27528e
	  System UUID:                03dc9006-e7ea-4386-a7cd-370dbe27528e
	  Boot ID:                    eab235e9-606a-4e10-b523-f7e56ad03e67
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-5c74p                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 coredns-668d6bf9bc-xhdlz                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-embed-certs-606219                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m13s
	  kube-system                 kube-apiserver-embed-certs-606219             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-controller-manager-embed-certs-606219    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-proxy-677x9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-embed-certs-606219             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 metrics-server-f79f97bbb-6fxnl                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m6s   kube-proxy       
	  Normal  Starting                 9m13s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m13s  kubelet          Node embed-certs-606219 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m13s  kubelet          Node embed-certs-606219 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m13s  kubelet          Node embed-certs-606219 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m9s   node-controller  Node embed-certs-606219 event: Registered Node embed-certs-606219 in Controller
	
	
	==> dmesg <==
	[  +0.055356] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050254] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.707115] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.117016] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.561897] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.244326] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.063876] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065176] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.179422] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.169466] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.314139] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[  +4.632743] systemd-fstab-generator[813]: Ignoring "noauto" option for root device
	[  +0.066196] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.139055] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +5.617115] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.047675] kauditd_printk_skb: 85 callbacks suppressed
	[Dec16 21:05] systemd-fstab-generator[2688]: Ignoring "noauto" option for root device
	[  +0.074849] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.006021] systemd-fstab-generator[3028]: Ignoring "noauto" option for root device
	[  +0.073632] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.377690] systemd-fstab-generator[3149]: Ignoring "noauto" option for root device
	[  +1.061221] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.581801] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [e7bce3fd7741fe3a1170af645de4eb2329619554377cffee90b06f6dbf85a52f] <==
	{"level":"info","ts":"2024-12-16T21:05:16.482296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a728e92c015ac5ad became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-16T21:05:16.482397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a728e92c015ac5ad received MsgPreVoteResp from a728e92c015ac5ad at term 1"}
	{"level":"info","ts":"2024-12-16T21:05:16.482411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a728e92c015ac5ad became candidate at term 2"}
	{"level":"info","ts":"2024-12-16T21:05:16.482440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a728e92c015ac5ad received MsgVoteResp from a728e92c015ac5ad at term 2"}
	{"level":"info","ts":"2024-12-16T21:05:16.482451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a728e92c015ac5ad became leader at term 2"}
	{"level":"info","ts":"2024-12-16T21:05:16.482458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a728e92c015ac5ad elected leader a728e92c015ac5ad at term 2"}
	{"level":"info","ts":"2024-12-16T21:05:16.484443Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"a728e92c015ac5ad","local-member-attributes":"{Name:embed-certs-606219 ClientURLs:[https://192.168.61.151:2379]}","request-path":"/0/members/a728e92c015ac5ad/attributes","cluster-id":"cd5f07220b22f85e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-16T21:05:16.484529Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T21:05:16.485009Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T21:05:16.485288Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T21:05:16.486917Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T21:05:16.487540Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T21:05:16.488272Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.151:2379"}
	{"level":"info","ts":"2024-12-16T21:05:16.488405Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-16T21:05:16.488435Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-16T21:05:16.489022Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5f07220b22f85e","local-member-id":"a728e92c015ac5ad","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T21:05:16.489274Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T21:05:16.489348Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T21:05:16.490265Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-16T21:05:26.524047Z","caller":"traceutil/trace.go:171","msg":"trace[568226517] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"108.934274ms","start":"2024-12-16T21:05:26.415084Z","end":"2024-12-16T21:05:26.524018Z","steps":["trace[568226517] 'process raft request'  (duration: 63.57113ms)","trace[568226517] 'compare'  (duration: 44.975703ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T21:05:26.524190Z","caller":"traceutil/trace.go:171","msg":"trace[424445992] linearizableReadLoop","detail":"{readStateIndex:375; appliedIndex:373; }","duration":"108.374959ms","start":"2024-12-16T21:05:26.415367Z","end":"2024-12-16T21:05:26.523742Z","steps":["trace[424445992] 'read index received'  (duration: 10.832877ms)","trace[424445992] 'applied index is now lower than readState.Index'  (duration: 97.54135ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-16T21:05:26.525378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.945969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/view\" limit:1 ","response":"range_response_count:1 size:2157"}
	{"level":"info","ts":"2024-12-16T21:05:26.525412Z","caller":"traceutil/trace.go:171","msg":"trace[856640821] range","detail":"{range_begin:/registry/clusterroles/view; range_end:; response_count:1; response_revision:366; }","duration":"110.055635ms","start":"2024-12-16T21:05:26.415347Z","end":"2024-12-16T21:05:26.525402Z","steps":["trace[856640821] 'agreement among raft nodes before linearized reading'  (duration: 109.930143ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T21:05:26.531944Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.382844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/metrics-server:system:auth-delegator\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T21:05:26.532003Z","caller":"traceutil/trace.go:171","msg":"trace[1123035585] range","detail":"{range_begin:/registry/clusterrolebindings/metrics-server:system:auth-delegator; range_end:; response_count:0; response_revision:367; }","duration":"112.47339ms","start":"2024-12-16T21:05:26.419519Z","end":"2024-12-16T21:05:26.531993Z","steps":["trace[1123035585] 'agreement among raft nodes before linearized reading'  (duration: 112.338371ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:14:33 up 14 min,  0 users,  load average: 0.41, 0.35, 0.23
	Linux embed-certs-606219 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4ea9639cafb04ef075e0b7522ec597b07b4878836f5fdb90e98b048758325993] <==
	W1216 21:05:07.685577       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:07.876405       1 logging.go:55] [core] [Channel #18 SubChannel #19]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:07.893600       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:08.174204       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:11.227413       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:11.662920       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:11.947981       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:11.958953       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.227327       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.393366       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.408249       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.426768       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.496363       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.516644       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.532288       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.545081       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.679580       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.686241       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.719670       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.721098       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.759430       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.796210       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.808210       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.975388       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.975412       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [bcfec28b8854e887e26ff5e13a923fb0c3fed8905ec63a220a57a76d0df19da2] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1216 21:10:18.911519       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:10:18.911560       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1216 21:10:18.912751       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 21:10:18.912813       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1216 21:11:18.913091       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:11:18.913287       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1216 21:11:18.913671       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:11:18.913846       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 21:11:18.914510       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 21:11:18.915572       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1216 21:13:18.915055       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:13:18.915469       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1216 21:13:18.916081       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:13:18.916215       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 21:13:18.917311       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 21:13:18.917323       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d76c01ea6554ad7ca7460a3e0b52e675fccc595170707f82e376ab5b53a254d3] <==
	E1216 21:09:24.490944       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:09:24.521322       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:09:54.498442       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:09:54.531935       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:10:24.507764       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:10:24.543092       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:10:54.514930       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:10:54.553286       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:11:24.521486       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:11:24.561190       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 21:11:27.779687       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="202.757µs"
	I1216 21:11:38.779294       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="70.456µs"
	E1216 21:11:54.527262       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:11:54.569285       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 21:11:58.600096       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-606219"
	E1216 21:12:24.534238       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:12:24.577268       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:12:54.540461       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:12:54.586444       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:13:24.549303       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:13:24.595409       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:13:54.556998       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:13:54.603235       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:14:24.564161       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:14:24.611205       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [af285d29097840b5484fc635cea5ab9e9ffa5c72a4d6ad4cc8eec49901107aa8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1216 21:05:27.385466       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1216 21:05:27.398312       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.151"]
	E1216 21:05:27.398631       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 21:05:27.448405       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1216 21:05:27.448472       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 21:05:27.448505       1 server_linux.go:170] "Using iptables Proxier"
	I1216 21:05:27.451919       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 21:05:27.452359       1 server.go:497] "Version info" version="v1.32.0"
	I1216 21:05:27.452389       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 21:05:27.454751       1 config.go:199] "Starting service config controller"
	I1216 21:05:27.454797       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 21:05:27.454820       1 config.go:105] "Starting endpoint slice config controller"
	I1216 21:05:27.454824       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 21:05:27.455585       1 config.go:329] "Starting node config controller"
	I1216 21:05:27.455615       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 21:05:27.554963       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 21:05:27.555071       1 shared_informer.go:320] Caches are synced for service config
	I1216 21:05:27.556020       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b3a5c85fda02edeece59bcc01bb3489ae65a10b552e4e3b193037ba8d7a2cd2e] <==
	W1216 21:05:18.841033       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 21:05:18.841579       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:18.846825       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1216 21:05:18.846898       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:18.866015       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1216 21:05:18.866071       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:18.868512       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1216 21:05:18.869510       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:18.891764       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 21:05:18.892244       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:18.936700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 21:05:18.936802       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:19.016309       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1216 21:05:19.016412       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:19.019988       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1216 21:05:19.020053       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:19.082030       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1216 21:05:19.082084       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:19.136485       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1216 21:05:19.136583       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:19.188694       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 21:05:19.188895       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:19.219966       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1216 21:05:19.220097       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1216 21:05:22.434428       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 21:13:30 embed-certs-606219 kubelet[3035]: E1216 21:13:30.764056    3035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-6fxnl" podUID="828f2925-402c-4f49-89e1-354e082c0de4"
	Dec 16 21:13:30 embed-certs-606219 kubelet[3035]: E1216 21:13:30.972736    3035 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383610972451854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:30 embed-certs-606219 kubelet[3035]: E1216 21:13:30.972924    3035 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383610972451854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:40 embed-certs-606219 kubelet[3035]: E1216 21:13:40.974622    3035 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383620974102898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:40 embed-certs-606219 kubelet[3035]: E1216 21:13:40.975076    3035 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383620974102898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:42 embed-certs-606219 kubelet[3035]: E1216 21:13:42.762438    3035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-6fxnl" podUID="828f2925-402c-4f49-89e1-354e082c0de4"
	Dec 16 21:13:50 embed-certs-606219 kubelet[3035]: E1216 21:13:50.977460    3035 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383630977194526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:50 embed-certs-606219 kubelet[3035]: E1216 21:13:50.977518    3035 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383630977194526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:13:54 embed-certs-606219 kubelet[3035]: E1216 21:13:54.761649    3035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-6fxnl" podUID="828f2925-402c-4f49-89e1-354e082c0de4"
	Dec 16 21:14:00 embed-certs-606219 kubelet[3035]: E1216 21:14:00.979462    3035 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383640979061333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:14:00 embed-certs-606219 kubelet[3035]: E1216 21:14:00.979513    3035 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383640979061333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:14:08 embed-certs-606219 kubelet[3035]: E1216 21:14:08.768361    3035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-6fxnl" podUID="828f2925-402c-4f49-89e1-354e082c0de4"
	Dec 16 21:14:10 embed-certs-606219 kubelet[3035]: E1216 21:14:10.989594    3035 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383650988420465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:14:10 embed-certs-606219 kubelet[3035]: E1216 21:14:10.990286    3035 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383650988420465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:14:19 embed-certs-606219 kubelet[3035]: E1216 21:14:19.762384    3035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-6fxnl" podUID="828f2925-402c-4f49-89e1-354e082c0de4"
	Dec 16 21:14:20 embed-certs-606219 kubelet[3035]: E1216 21:14:20.805934    3035 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 16 21:14:20 embed-certs-606219 kubelet[3035]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 16 21:14:20 embed-certs-606219 kubelet[3035]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 16 21:14:20 embed-certs-606219 kubelet[3035]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 16 21:14:20 embed-certs-606219 kubelet[3035]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 16 21:14:20 embed-certs-606219 kubelet[3035]: E1216 21:14:20.992476    3035 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383660991997078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:14:20 embed-certs-606219 kubelet[3035]: E1216 21:14:20.992753    3035 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383660991997078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:14:30 embed-certs-606219 kubelet[3035]: E1216 21:14:30.995101    3035 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383670994575391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:14:30 embed-certs-606219 kubelet[3035]: E1216 21:14:30.995810    3035 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383670994575391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:14:32 embed-certs-606219 kubelet[3035]: E1216 21:14:32.762917    3035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-6fxnl" podUID="828f2925-402c-4f49-89e1-354e082c0de4"
	
	
	==> storage-provisioner [f207b770a60c78f07c7d2caae42124dc7cb80a0f2c2c4d421803607465ed058c] <==
	I1216 21:05:27.318271       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 21:05:27.351928       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 21:05:27.352251       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 21:05:27.368455       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 21:05:27.369185       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce61853c-bbb0-4582-9389-51e55aaa1cf4", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-606219_226d71a5-5f7f-477e-8a29-66b3064d5f06 became leader
	I1216 21:05:27.369267       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-606219_226d71a5-5f7f-477e-8a29-66b3064d5f06!
	I1216 21:05:27.469430       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-606219_226d71a5-5f7f-477e-8a29-66b3064d5f06!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-606219 -n embed-certs-606219
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-606219 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-6fxnl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-606219 describe pod metrics-server-f79f97bbb-6fxnl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-606219 describe pod metrics-server-f79f97bbb-6fxnl: exit status 1 (66.027856ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-6fxnl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-606219 describe pod metrics-server-f79f97bbb-6fxnl: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.35s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
E1216 21:10:50.481617   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:12:13.884283   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
[… the identical WARNING above repeated verbatim 74 more times while the apiserver at 192.168.72.240:8443 kept refusing connections …]
E1216 21:15:50.481374   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
[… the same WARNING continued, repeating verbatim another 77 times, until the client-go rate limiter reported the context deadline on the next line …]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-847766 -n old-k8s-version-847766
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-847766 -n old-k8s-version-847766: exit status 2 (246.487193ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-847766" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
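
The wall of WARNING lines above comes from a test helper that repeatedly lists pods by label selector until its context deadline expires; because the old-k8s-version apiserver never came back up after the stop/start cycle, every poll failed with "connection refused" and the 9m0s wait ended in "context deadline exceeded". Purely as an illustration of that polling pattern (this is not the actual helpers_test.go code; the function names, poll interval, and kubeconfig handling are assumptions), a minimal client-go wait loop might look like this:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPod polls until a pod matching selector in ns is Running or
// the timeout expires. List errors (e.g. "connection refused" while the
// apiserver is down) are logged and polling continues, which is what produces
// the repeated WARNING lines in the report.
func waitForRunningPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
				return false, nil // keep polling until the deadline
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Assumes a kubeconfig at the default location; the real test builds its
	// client from the minikube profile under test instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForRunningPod(context.Background(), cs, "kubernetes-dashboard",
		"k8s-app=kubernetes-dashboard", 9*time.Minute); err != nil {
		fmt.Println("wait failed:", err) // here: context deadline exceeded
	}
}

Against a healthy cluster this returns as soon as the dashboard pod reaches Running; in this run the loop simply burned through the 9m0s budget and surfaced the context-deadline error recorded above.
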
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-847766 -n old-k8s-version-847766
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-847766 -n old-k8s-version-847766: exit status 2 (237.029428ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
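
Before dumping logs, the post-mortem probes the machine with "minikube status --format={{.Host}}" and "--format={{.APIServer}}"; with the apiserver stopped the command exits non-zero, and the helper explicitly notes that exit status 2 "may be ok". As a rough sketch of driving that same check from Go with os/exec (an illustration only, not the minikube test harness; the binary path and profile name are copied from the log above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// statusField runs `minikube status` with a Go-template format for one field
// and returns the printed value plus the exit code, treating a non-zero exit
// as informational rather than fatal, like the test helper does.
func statusField(minikube, profile, goTemplate string) (string, int, error) {
	cmd := exec.Command(minikube, "status", "--format", goTemplate, "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	exitCode := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		exitCode = exitErr.ExitCode() // non-zero when a component is stopped
		err = nil
	}
	return strings.TrimSpace(string(out)), exitCode, err
}

func main() {
	for _, tmpl := range []string{"{{.Host}}", "{{.APIServer}}"} {
		out, code, err := statusField("out/minikube-linux-amd64", "old-k8s-version-847766", tmpl)
		if err != nil {
			fmt.Println("failed to run minikube status:", err)
			continue
		}
		fmt.Printf("%s -> %s (exit %d, may be ok)\n", tmpl, out, code)
	}
}
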
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-847766 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-847766 logs -n 25: (1.579066979s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-976873                              | stopped-upgrade-976873       | jenkins | v1.34.0 | 16 Dec 24 20:49 UTC | 16 Dec 24 20:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-560677                           | kubernetes-upgrade-560677    | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:50 UTC |
	| start   | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-976873                              | stopped-upgrade-976873       | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:50 UTC |
	| start   | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-270954                              | cert-expiration-270954       | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-606219            | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-270954                              | cert-expiration-270954       | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-384008 | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | disable-driver-mounts-384008                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:52 UTC |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-232338             | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC | 16 Dec 24 20:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-327790  | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC | 16 Dec 24 20:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC |                     |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-847766        | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-606219                 | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC | 16 Dec 24 21:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-232338                  | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC | 16 Dec 24 21:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-327790       | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 20:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 21:04 UTC |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-847766             | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 20:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 20:55:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 20:55:34.390724   60933 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:55:34.390973   60933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:55:34.390982   60933 out.go:358] Setting ErrFile to fd 2...
	I1216 20:55:34.390986   60933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:55:34.391166   60933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:55:34.391763   60933 out.go:352] Setting JSON to false
	I1216 20:55:34.392611   60933 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5879,"bootTime":1734376655,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 20:55:34.392675   60933 start.go:139] virtualization: kvm guest
	I1216 20:55:34.394822   60933 out.go:177] * [old-k8s-version-847766] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 20:55:34.396184   60933 notify.go:220] Checking for updates...
	I1216 20:55:34.396201   60933 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 20:55:34.397724   60933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 20:55:34.399130   60933 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:55:34.400470   60933 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:55:34.401934   60933 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 20:55:34.403341   60933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 20:55:34.405179   60933 config.go:182] Loaded profile config "old-k8s-version-847766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:55:34.405571   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:55:34.405650   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:55:34.421052   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I1216 20:55:34.421523   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:55:34.422018   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:55:34.422035   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:55:34.422373   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:55:34.422646   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:55:34.424565   60933 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I1216 20:55:34.426088   60933 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 20:55:34.426419   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:55:34.426474   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:55:34.441375   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I1216 20:55:34.441833   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:55:34.442327   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:55:34.442349   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:55:34.442658   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:55:34.442852   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:55:34.480512   60933 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 20:55:34.481972   60933 start.go:297] selected driver: kvm2
	I1216 20:55:34.481988   60933 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:55:34.482125   60933 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 20:55:34.482826   60933 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:55:34.482907   60933 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 20:55:34.498561   60933 install.go:137] /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1216 20:55:34.498953   60933 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 20:55:34.498981   60933 cni.go:84] Creating CNI manager for ""
	I1216 20:55:34.499022   60933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:55:34.499060   60933 start.go:340] cluster config:
	{Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:55:34.499164   60933 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:55:34.501128   60933 out.go:177] * Starting "old-k8s-version-847766" primary control-plane node in "old-k8s-version-847766" cluster
	I1216 20:55:29.827520   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:32.899553   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:30.468027   60829 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:55:30.468071   60829 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:55:30.468079   60829 cache.go:56] Caching tarball of preloaded images
	I1216 20:55:30.468192   60829 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:55:30.468206   60829 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1216 20:55:30.468310   60829 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/config.json ...
	I1216 20:55:30.468540   60829 start.go:360] acquireMachinesLock for default-k8s-diff-port-327790: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:55:34.502579   60933 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:55:34.502609   60933 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:55:34.502615   60933 cache.go:56] Caching tarball of preloaded images
	I1216 20:55:34.502716   60933 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:55:34.502731   60933 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1216 20:55:34.502823   60933 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/config.json ...
	I1216 20:55:34.503011   60933 start.go:360] acquireMachinesLock for old-k8s-version-847766: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:55:38.979556   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:42.051532   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:48.131588   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:51.203568   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:57.283622   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:00.355490   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:06.435543   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:09.507559   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:15.587526   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:18.659657   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:24.739528   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:27.811498   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:33.891518   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:36.963554   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:43.043553   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:46.115578   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:52.195583   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:55.267507   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:01.347591   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:04.419562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:10.499479   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:13.571540   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:19.651541   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:22.723545   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:28.803551   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:31.875527   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:37.955563   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:41.027520   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:47.107494   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:50.179566   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:56.259550   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:59.331540   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:05.411562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:08.483592   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:14.563574   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:17.635522   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:23.715548   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:26.787559   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:32.867539   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:35.939502   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:42.019562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:45.091545   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:51.171521   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:54.243542   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:57.248710   60421 start.go:364] duration metric: took 4m14.403979547s to acquireMachinesLock for "no-preload-232338"
	I1216 20:58:57.248796   60421 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:58:57.248804   60421 fix.go:54] fixHost starting: 
	I1216 20:58:57.249232   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:58:57.249288   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:58:57.264905   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39773
	I1216 20:58:57.265423   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:58:57.265982   60421 main.go:141] libmachine: Using API Version  1
	I1216 20:58:57.266005   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:58:57.266396   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:58:57.266636   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:58:57.266807   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 20:58:57.268705   60421 fix.go:112] recreateIfNeeded on no-preload-232338: state=Stopped err=<nil>
	I1216 20:58:57.268730   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	W1216 20:58:57.268918   60421 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:58:57.270855   60421 out.go:177] * Restarting existing kvm2 VM for "no-preload-232338" ...
	I1216 20:58:57.272142   60421 main.go:141] libmachine: (no-preload-232338) Calling .Start
	I1216 20:58:57.272374   60421 main.go:141] libmachine: (no-preload-232338) Ensuring networks are active...
	I1216 20:58:57.273245   60421 main.go:141] libmachine: (no-preload-232338) Ensuring network default is active
	I1216 20:58:57.273660   60421 main.go:141] libmachine: (no-preload-232338) Ensuring network mk-no-preload-232338 is active
	I1216 20:58:57.274025   60421 main.go:141] libmachine: (no-preload-232338) Getting domain xml...
	I1216 20:58:57.274673   60421 main.go:141] libmachine: (no-preload-232338) Creating domain...
	I1216 20:58:57.245632   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:58:57.245753   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 20:58:57.246111   60215 buildroot.go:166] provisioning hostname "embed-certs-606219"
	I1216 20:58:57.246149   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 20:58:57.246399   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 20:58:57.248517   60215 machine.go:96] duration metric: took 4m37.414570479s to provisionDockerMachine
	I1216 20:58:57.248579   60215 fix.go:56] duration metric: took 4m37.437232743s for fixHost
	I1216 20:58:57.248587   60215 start.go:83] releasing machines lock for "embed-certs-606219", held for 4m37.437262865s
	W1216 20:58:57.248614   60215 start.go:714] error starting host: provision: host is not running
	W1216 20:58:57.248791   60215 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1216 20:58:57.248801   60215 start.go:729] Will try again in 5 seconds ...
	I1216 20:58:58.506521   60421 main.go:141] libmachine: (no-preload-232338) Waiting to get IP...
	I1216 20:58:58.507302   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:58.507627   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:58.507699   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:58.507613   61660 retry.go:31] will retry after 230.281045ms: waiting for machine to come up
	I1216 20:58:58.739343   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:58.739781   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:58.739804   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:58.739741   61660 retry.go:31] will retry after 323.962271ms: waiting for machine to come up
	I1216 20:58:59.065388   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:59.065856   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:59.065884   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:59.065816   61660 retry.go:31] will retry after 364.058481ms: waiting for machine to come up
	I1216 20:58:59.431290   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:59.431680   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:59.431707   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:59.431631   61660 retry.go:31] will retry after 569.845721ms: waiting for machine to come up
	I1216 20:59:00.003562   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:00.004030   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:00.004093   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:00.003988   61660 retry.go:31] will retry after 728.729909ms: waiting for machine to come up
	I1216 20:59:00.733954   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:00.734450   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:00.734482   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:00.734388   61660 retry.go:31] will retry after 679.479889ms: waiting for machine to come up
	I1216 20:59:01.415289   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:01.415739   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:01.415763   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:01.415690   61660 retry.go:31] will retry after 1.136560245s: waiting for machine to come up
	I1216 20:59:02.554094   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:02.554523   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:02.554548   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:02.554470   61660 retry.go:31] will retry after 1.299578742s: waiting for machine to come up
	I1216 20:59:02.250499   60215 start.go:360] acquireMachinesLock for embed-certs-606219: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:59:03.855999   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:03.856366   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:03.856399   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:03.856300   61660 retry.go:31] will retry after 1.761269163s: waiting for machine to come up
	I1216 20:59:05.620383   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:05.620837   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:05.620858   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:05.620818   61660 retry.go:31] will retry after 2.100894301s: waiting for machine to come up
	I1216 20:59:07.723931   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:07.724300   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:07.724322   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:07.724273   61660 retry.go:31] will retry after 2.57501483s: waiting for machine to come up
	I1216 20:59:10.302185   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:10.302766   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:10.302802   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:10.302706   61660 retry.go:31] will retry after 2.68456895s: waiting for machine to come up
	I1216 20:59:17.060397   60829 start.go:364] duration metric: took 3m46.591813882s to acquireMachinesLock for "default-k8s-diff-port-327790"
	I1216 20:59:17.060456   60829 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:17.060462   60829 fix.go:54] fixHost starting: 
	I1216 20:59:17.060878   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:17.060935   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:17.079226   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41365
	I1216 20:59:17.079715   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:17.080173   60829 main.go:141] libmachine: Using API Version  1
	I1216 20:59:17.080202   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:17.080554   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:17.080731   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:17.080873   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 20:59:17.082368   60829 fix.go:112] recreateIfNeeded on default-k8s-diff-port-327790: state=Stopped err=<nil>
	I1216 20:59:17.082399   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	W1216 20:59:17.082570   60829 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:17.085104   60829 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-327790" ...
	I1216 20:59:12.988787   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:12.989140   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:12.989172   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:12.989098   61660 retry.go:31] will retry after 2.793178881s: waiting for machine to come up
	I1216 20:59:15.786011   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.786518   60421 main.go:141] libmachine: (no-preload-232338) Found IP for machine: 192.168.50.240
	I1216 20:59:15.786540   60421 main.go:141] libmachine: (no-preload-232338) Reserving static IP address...
	I1216 20:59:15.786564   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has current primary IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.786948   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "no-preload-232338", mac: "52:54:00:07:00:29", ip: "192.168.50.240"} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.786983   60421 main.go:141] libmachine: (no-preload-232338) DBG | skip adding static IP to network mk-no-preload-232338 - found existing host DHCP lease matching {name: "no-preload-232338", mac: "52:54:00:07:00:29", ip: "192.168.50.240"}
	I1216 20:59:15.786995   60421 main.go:141] libmachine: (no-preload-232338) Reserved static IP address: 192.168.50.240
	I1216 20:59:15.787009   60421 main.go:141] libmachine: (no-preload-232338) Waiting for SSH to be available...
	I1216 20:59:15.787022   60421 main.go:141] libmachine: (no-preload-232338) DBG | Getting to WaitForSSH function...
	I1216 20:59:15.789175   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.789509   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.789542   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.789633   60421 main.go:141] libmachine: (no-preload-232338) DBG | Using SSH client type: external
	I1216 20:59:15.789674   60421 main.go:141] libmachine: (no-preload-232338) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa (-rw-------)
	I1216 20:59:15.789709   60421 main.go:141] libmachine: (no-preload-232338) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:15.789718   60421 main.go:141] libmachine: (no-preload-232338) DBG | About to run SSH command:
	I1216 20:59:15.789726   60421 main.go:141] libmachine: (no-preload-232338) DBG | exit 0
	I1216 20:59:15.915980   60421 main.go:141] libmachine: (no-preload-232338) DBG | SSH cmd err, output: <nil>: 
	I1216 20:59:15.916473   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetConfigRaw
	I1216 20:59:15.917088   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:15.919782   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.920161   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.920192   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.920408   60421 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/config.json ...
	I1216 20:59:15.920636   60421 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:15.920654   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:15.920875   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:15.923221   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.923623   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.923650   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.923784   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:15.923971   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:15.924107   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:15.924246   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:15.924395   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:15.924715   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:15.924729   60421 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:16.032079   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:16.032108   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.032397   60421 buildroot.go:166] provisioning hostname "no-preload-232338"
	I1216 20:59:16.032423   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.032649   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.035467   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.035798   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.035826   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.036003   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.036184   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.036335   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.036494   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.036679   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.036847   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.036859   60421 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-232338 && echo "no-preload-232338" | sudo tee /etc/hostname
	I1216 20:59:16.161958   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-232338
	
	I1216 20:59:16.161996   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.164585   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.165086   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.165130   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.165370   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.165578   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.165746   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.165877   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.166015   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.166188   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.166204   60421 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-232338' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-232338/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-232338' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:16.285329   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:16.285374   60421 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:16.285407   60421 buildroot.go:174] setting up certificates
	I1216 20:59:16.285422   60421 provision.go:84] configureAuth start
	I1216 20:59:16.285432   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.285764   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:16.288773   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.289161   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.289192   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.289405   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.291687   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.292042   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.292072   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.292190   60421 provision.go:143] copyHostCerts
	I1216 20:59:16.292260   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:16.292274   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:16.292343   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:16.292470   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:16.292481   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:16.292508   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:16.292563   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:16.292570   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:16.292590   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:16.292649   60421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.no-preload-232338 san=[127.0.0.1 192.168.50.240 localhost minikube no-preload-232338]
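provision.go then issues a server certificate whose SANs cover every name and address this machine will be reached by (127.0.0.1, the VM IP, localhost, minikube, and the profile name), signed by the CA under .minikube/certs. A rough standalone sketch of that idea using only the standard library; it is self-signed for brevity, whereas the real flow signs with the CA key pair, and the org/SAN values are copied from the log line above:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-232338"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SAN list mirrors san=[...] in the log: names plus IPs.
    		DNSNames:    []string{"localhost", "minikube", "no-preload-232338"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.240")},
    	}
    	// Self-signed here; provision.go signs with ca.pem/ca-key.pem instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
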
	I1216 20:59:16.407096   60421 provision.go:177] copyRemoteCerts
	I1216 20:59:16.407187   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:16.407227   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.410400   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.410725   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.410755   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.410977   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.411188   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.411437   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.411618   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:16.498456   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 20:59:16.525297   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:16.551135   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 20:59:16.576040   60421 provision.go:87] duration metric: took 290.601941ms to configureAuth
	I1216 20:59:16.576074   60421 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:16.576288   60421 config.go:182] Loaded profile config "no-preload-232338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:59:16.576396   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.579169   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.579607   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.579641   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.579795   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.580016   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.580165   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.580311   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.580467   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.580629   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.580643   60421 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:16.816973   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:16.816998   60421 machine.go:96] duration metric: took 896.349056ms to provisionDockerMachine
	I1216 20:59:16.817010   60421 start.go:293] postStartSetup for "no-preload-232338" (driver="kvm2")
	I1216 20:59:16.817030   60421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:16.817044   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:16.817427   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:16.817454   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.820182   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.820550   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.820578   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.820713   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.820914   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.821096   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.821274   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:16.906513   60421 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:16.911314   60421 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:16.911346   60421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:16.911482   60421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:16.911589   60421 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:16.911720   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:16.921890   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:16.947114   60421 start.go:296] duration metric: took 130.089628ms for postStartSetup
	I1216 20:59:16.947192   60421 fix.go:56] duration metric: took 19.698385497s for fixHost
	I1216 20:59:16.947229   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.950156   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.950543   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.950575   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.950780   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.950996   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.951199   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.951394   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.951604   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.951829   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.951843   60421 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:17.060233   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382757.032597424
	
	I1216 20:59:17.060258   60421 fix.go:216] guest clock: 1734382757.032597424
	I1216 20:59:17.060265   60421 fix.go:229] Guest: 2024-12-16 20:59:17.032597424 +0000 UTC Remote: 2024-12-16 20:59:16.947203535 +0000 UTC m=+274.247918927 (delta=85.393889ms)
	I1216 20:59:17.060290   60421 fix.go:200] guest clock delta is within tolerance: 85.393889ms
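The guest clock check works by running "date +%s.%N" over SSH and diffing the result against the host-side timestamp captured just before; the 85.393889ms delta above is under the allowed drift, so the guest clock is left alone. The arithmetic, reproduced with the values from the log (the tolerance constant here is hypothetical; the real threshold is defined in minikube's fix.go):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values lifted from fix.go:216/229 above.
    	guest := time.Unix(1734382757, 32597424) // 2024-12-16 20:59:17.032597424 UTC
    	remote := time.Date(2024, 12, 16, 20, 59, 16, 947203535, time.UTC)
    	delta := guest.Sub(remote)
    	// Hypothetical tolerance for illustration only.
    	const tolerance = 2 * time.Second
    	fmt.Println(delta, delta > -tolerance && delta < tolerance) // 85.393889ms true
    }
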
	I1216 20:59:17.060294   60421 start.go:83] releasing machines lock for "no-preload-232338", held for 19.811539815s
	I1216 20:59:17.060318   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.060636   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:17.063346   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.063742   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.063764   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.063900   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064419   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064647   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064766   60421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:17.064804   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:17.064897   60421 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:17.064923   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:17.067687   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.067897   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068129   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.068166   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068314   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:17.068318   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.068343   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068491   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:17.068573   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:17.068754   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:17.068778   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:17.068914   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:17.069085   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:17.069229   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:17.149502   60421 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:17.184981   60421 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:17.335267   60421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:17.344316   60421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:17.344381   60421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:17.362422   60421 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:17.362450   60421 start.go:495] detecting cgroup driver to use...
	I1216 20:59:17.362526   60421 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:17.379285   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:17.394451   60421 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:17.394514   60421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:17.411856   60421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:17.428028   60421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:17.557602   60421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:17.699140   60421 docker.go:233] disabling docker service ...
	I1216 20:59:17.699215   60421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:17.715236   60421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:17.729268   60421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:17.875729   60421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:18.007569   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:18.022940   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:18.042227   60421 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 20:59:18.042292   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.053011   60421 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:18.053081   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.063767   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.074262   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.085372   60421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:18.098366   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.113619   60421 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.134081   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
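These sed invocations rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, force conmon into the pod cgroup, and open unprivileged ports through default_sysctls (the crictl endpoint was pointed at crio.sock just above). An in-process equivalent of the first two edits, as a sketch rather than minikube's actual code:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	// Same net effect as the `sudo sed -i` calls in the log above.
    	path := "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
    	if err := os.WriteFile(path, data, 0644); err != nil {
    		panic(err)
    	}
    }
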
	I1216 20:59:18.145276   60421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:18.155733   60421 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:18.155806   60421 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:18.170492   60421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
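The sysctl probe fails with status 255 simply because /proc/sys/net/bridge does not exist until the br_netfilter module is loaded, which is why crio.go:166 treats it as "might be okay": minikube loads the module and enables IPv4 forwarding before restarting CRI-O. A small sketch of the same fallback, not the actual implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// The sysctl key only appears once br_netfilter is loaded, so a failed
    	// stat is read as "module not loaded yet" rather than a hard error.
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
    		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			fmt.Fprintf(os.Stderr, "modprobe failed: %v: %s\n", err, out)
    			return
    		}
    	}
    	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
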
	I1216 20:59:18.182276   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:18.291278   60421 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:18.384618   60421 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:18.384700   60421 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:18.390755   60421 start.go:563] Will wait 60s for crictl version
	I1216 20:59:18.390823   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.395435   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:18.439300   60421 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:18.439390   60421 ssh_runner.go:195] Run: crio --version
	I1216 20:59:18.473976   60421 ssh_runner.go:195] Run: crio --version
	I1216 20:59:18.505262   60421 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 20:59:17.086569   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Start
	I1216 20:59:17.086752   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring networks are active...
	I1216 20:59:17.087656   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring network default is active
	I1216 20:59:17.088082   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring network mk-default-k8s-diff-port-327790 is active
	I1216 20:59:17.088482   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Getting domain xml...
	I1216 20:59:17.089219   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Creating domain...
	I1216 20:59:18.413245   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting to get IP...
	I1216 20:59:18.414327   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.414794   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.414907   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.414784   61807 retry.go:31] will retry after 229.952775ms: waiting for machine to come up
	I1216 20:59:18.646270   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.646677   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.646727   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.646654   61807 retry.go:31] will retry after 341.342128ms: waiting for machine to come up
	I1216 20:59:18.989285   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.989781   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.989809   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.989740   61807 retry.go:31] will retry after 311.937657ms: waiting for machine to come up
	I1216 20:59:19.303619   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.304189   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.304221   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:19.304131   61807 retry.go:31] will retry after 515.638431ms: waiting for machine to come up
	I1216 20:59:19.821478   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.821955   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.821997   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:19.821900   61807 retry.go:31] will retry after 590.835789ms: waiting for machine to come up
	I1216 20:59:18.506840   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:18.510260   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:18.510654   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:18.510689   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:18.510875   60421 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:18.515632   60421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:18.529943   60421 kubeadm.go:883] updating cluster {Name:no-preload-232338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:18.530128   60421 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:59:18.530184   60421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:18.569526   60421 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 20:59:18.569555   60421 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.0 registry.k8s.io/kube-controller-manager:v1.32.0 registry.k8s.io/kube-scheduler:v1.32.0 registry.k8s.io/kube-proxy:v1.32.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
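Because no preload tarball exists for v1.32.0 with CRI-O, every control-plane image has to come from the host-side cache: the "No such image" daemon lookups just below are expected, and for each image minikube inspects the runtime for the pinned digest, removes any stale tag with crictl, copies the cached tarball into /var/lib/minikube/images, and loads it with podman. A condensed sketch of that per-image decision; the helper and local execution are illustrative, since the real flow runs these commands over SSH:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureImage mirrors the per-image flow in the log: check the runtime for
    // the expected digest and, if it is missing, drop the stale tag and load the
    // tarball that was copied from the host cache.
    func ensureImage(name, wantID, tarball string) error {
    	out, _ := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", name).Output()
    	if string(out) == wantID+"\n" {
    		return nil // already present with the right hash
    	}
    	exec.Command("sudo", "/usr/bin/crictl", "rmi", name).Run() // ignore "not found"
    	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
    		return fmt.Errorf("load %s: %w", name, err)
    	}
    	return nil
    }

    func main() {
    	_ = ensureImage("registry.k8s.io/kube-scheduler:v1.32.0",
    		"a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5",
    		"/var/lib/minikube/images/kube-scheduler_v1.32.0")
    }
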
	I1216 20:59:18.569650   60421 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:18.569669   60421 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.569688   60421 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.569651   60421 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.569774   60421 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.569859   60421 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.569859   60421 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1216 20:59:18.570294   60421 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.571577   60421 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.571602   60421 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.571582   60421 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:18.571585   60421 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.571583   60421 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.571580   60421 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.571828   60421 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.571953   60421 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1216 20:59:18.781052   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.783569   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.795901   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.799273   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.801098   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.802163   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1216 20:59:18.828334   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.897880   60421 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.0" does not exist at hash "a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5" in container runtime
	I1216 20:59:18.897942   60421 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.898003   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.910616   60421 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I1216 20:59:18.910665   60421 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.910713   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.937699   60421 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.0" does not exist at hash "8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3" in container runtime
	I1216 20:59:18.937753   60421 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.937804   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.979455   60421 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.0" does not exist at hash "c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4" in container runtime
	I1216 20:59:18.979500   60421 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.979540   60421 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1216 20:59:18.979555   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.979586   60421 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.979636   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.002472   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.076177   60421 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.0" needs transfer: "registry.k8s.io/kube-proxy:v1.32.0" does not exist at hash "040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08" in container runtime
	I1216 20:59:19.076217   60421 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.076237   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.076252   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.076292   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.076351   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.076408   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.076487   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.076511   60421 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1216 20:59:19.076536   60421 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.076580   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.204766   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.204846   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.204904   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.204959   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.205097   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.205212   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.205285   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.365421   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.365466   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.365512   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.365620   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.365652   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.365771   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.365861   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.539614   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1216 20:59:19.539729   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:19.539740   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.539740   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0
	I1216 20:59:19.539817   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0
	I1216 20:59:19.539839   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:19.539840   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.539885   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.539949   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0
	I1216 20:59:19.540000   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I1216 20:59:19.540029   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:19.540062   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:19.555043   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.32.0 (exists)
	I1216 20:59:19.555076   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.555135   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.555251   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1216 20:59:19.630857   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.16-0 (exists)
	I1216 20:59:19.630949   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1216 20:59:19.630983   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0
	I1216 20:59:19.631030   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.32.0 (exists)
	I1216 20:59:19.631065   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:19.631104   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.32.0 (exists)
	I1216 20:59:19.631069   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:21.838285   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0: (2.283119694s)
	I1216 20:59:21.838328   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 from cache
	I1216 20:59:21.838359   60421 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:21.838394   60421 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.20725659s)
	I1216 20:59:21.838414   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1216 20:59:21.838421   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:21.838361   60421 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.0: (2.207274997s)
	I1216 20:59:21.838471   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.32.0 (exists)
	I1216 20:59:20.414932   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:20.415565   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:20.415597   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:20.415502   61807 retry.go:31] will retry after 698.152518ms: waiting for machine to come up
	I1216 20:59:21.115103   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:21.115597   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:21.115627   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:21.115543   61807 retry.go:31] will retry after 891.02308ms: waiting for machine to come up
	I1216 20:59:22.008636   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.009070   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.009098   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:22.009015   61807 retry.go:31] will retry after 923.634312ms: waiting for machine to come up
	I1216 20:59:22.934238   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.934753   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.934784   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:22.934697   61807 retry.go:31] will retry after 1.142718367s: waiting for machine to come up
	I1216 20:59:24.078935   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:24.079398   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:24.079429   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:24.079363   61807 retry.go:31] will retry after 1.541033224s: waiting for machine to come up
	I1216 20:59:23.901058   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.062611423s)
	I1216 20:59:23.901091   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1216 20:59:23.901122   60421 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:23.901169   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:25.621932   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:25.622401   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:25.622433   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:25.622364   61807 retry.go:31] will retry after 2.600280234s: waiting for machine to come up
	I1216 20:59:28.224296   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:28.224874   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:28.224892   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:28.224828   61807 retry.go:31] will retry after 3.308841216s: waiting for machine to come up
	I1216 20:59:27.793238   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0: (3.892042799s)
	I1216 20:59:27.793280   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 from cache
	I1216 20:59:27.793321   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:27.793420   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:29.552069   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0: (1.758623471s)
	I1216 20:59:29.552102   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 from cache
	I1216 20:59:29.552130   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:29.552177   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:31.708930   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0: (2.156719559s)
	I1216 20:59:31.708971   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 from cache
	I1216 20:59:31.709008   60421 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:31.709057   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:32.660657   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1216 20:59:32.660713   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:32.660775   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:31.537153   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:31.537735   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:31.537795   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:31.537710   61807 retry.go:31] will retry after 4.259700282s: waiting for machine to come up
	I1216 20:59:37.140408   60933 start.go:364] duration metric: took 4m2.637362394s to acquireMachinesLock for "old-k8s-version-847766"
	I1216 20:59:37.140483   60933 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:37.140491   60933 fix.go:54] fixHost starting: 
	I1216 20:59:37.140933   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:37.140988   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:37.159075   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
	I1216 20:59:37.159574   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:37.160140   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:59:37.160172   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:37.160560   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:37.160773   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:37.160889   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetState
	I1216 20:59:37.162561   60933 fix.go:112] recreateIfNeeded on old-k8s-version-847766: state=Stopped err=<nil>
	I1216 20:59:37.162603   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	W1216 20:59:37.162755   60933 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:37.166031   60933 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-847766" ...
	I1216 20:59:34.634064   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0: (1.973261206s)
	I1216 20:59:34.634117   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 from cache
	I1216 20:59:34.634154   60421 cache_images.go:123] Successfully loaded all cached images
	I1216 20:59:34.634160   60421 cache_images.go:92] duration metric: took 16.064590407s to LoadCachedImages
	I1216 20:59:34.634171   60421 kubeadm.go:934] updating node { 192.168.50.240 8443 v1.32.0 crio true true} ...
	I1216 20:59:34.634331   60421 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-232338 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 20:59:34.634420   60421 ssh_runner.go:195] Run: crio config
	I1216 20:59:34.688034   60421 cni.go:84] Creating CNI manager for ""
	I1216 20:59:34.688059   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:34.688068   60421 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:59:34.688093   60421 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.240 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-232338 NodeName:no-preload-232338 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 20:59:34.688277   60421 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-232338"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.240"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.240"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 20:59:34.688356   60421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 20:59:34.699709   60421 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:59:34.699784   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:59:34.710306   60421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1216 20:59:34.732401   60421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:59:34.757561   60421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
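	The kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one multi-document YAML) is what was just copied to /var/tmp/minikube/kubeadm.yaml.new. As a hedged sketch of how such a rendered file could be sanity-checked on the node with kubeadm's own validator (assuming the bundled kubeadm supports `kubeadm config validate`, which recent releases do; this command is not part of the captured run):
	
		# Sketch: validate the rendered config with the bundled kubeadm binary.
		sudo /var/lib/minikube/binaries/v1.32.0/kubeadm config validate \
		  --config /var/tmp/minikube/kubeadm.yaml.new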
	I1216 20:59:34.776094   60421 ssh_runner.go:195] Run: grep 192.168.50.240	control-plane.minikube.internal$ /etc/hosts
	I1216 20:59:34.780341   60421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
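	The /etc/hosts rewrite above uses a strip-then-append pattern so repeated starts stay idempotent: any existing line for the name is filtered out, then a fresh entry is appended and the file is copied back into place. A minimal standalone sketch of the same pattern (NAME and IP are placeholder values):
	
		# Sketch: idempotently pin a hostname to an IP in /etc/hosts.
		NAME=control-plane.minikube.internal
		IP=192.168.50.240
		{ grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "${IP}" "${NAME}"; } > /tmp/hosts.$$
		sudo cp /tmp/hosts.$$ /etc/hosts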
	I1216 20:59:34.794025   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:34.930543   60421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:59:34.948720   60421 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338 for IP: 192.168.50.240
	I1216 20:59:34.948752   60421 certs.go:194] generating shared ca certs ...
	I1216 20:59:34.948776   60421 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:59:34.949035   60421 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:59:34.949094   60421 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:59:34.949115   60421 certs.go:256] generating profile certs ...
	I1216 20:59:34.949243   60421 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.key
	I1216 20:59:34.949327   60421 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.key.674e04e3
	I1216 20:59:34.949379   60421 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.key
	I1216 20:59:34.949509   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:59:34.949547   60421 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:59:34.949557   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:59:34.949582   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:59:34.949604   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:59:34.949627   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:59:34.949662   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:34.950648   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:59:34.994491   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:59:35.029853   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:59:35.058834   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:59:35.096870   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 20:59:35.126467   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 20:59:35.160826   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:59:35.186344   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 20:59:35.211125   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:59:35.238705   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:59:35.266485   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:59:35.291729   60421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 20:59:35.311939   60421 ssh_runner.go:195] Run: openssl version
	I1216 20:59:35.318397   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:59:35.332081   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.336967   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.337022   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.343307   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 20:59:35.356515   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:59:35.370380   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.375538   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.375589   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.381736   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:59:35.395677   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:59:35.409029   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.414358   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.414427   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.421352   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
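	The 3ec20f2e.0, b5213941.0 and 51391683.0 link names above are the OpenSSL subject hashes of the corresponding PEM files; the symlinks let the system OpenSSL find each CA by hash lookup. The generic pattern, as a sketch using one of the certs already installed above:
	
		# Sketch: link a CA cert under its OpenSSL subject hash in /etc/ssl/certs.
		CERT=/usr/share/ca-certificates/minikubeCA.pem
		HASH=$(openssl x509 -hash -noout -in "${CERT}")
		sudo ln -fs "${CERT}" "/etc/ssl/certs/${HASH}.0"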
	I1216 20:59:35.435322   60421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:59:35.440479   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 20:59:35.447408   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 20:59:35.453992   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 20:59:35.460713   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 20:59:35.467109   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 20:59:35.473412   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
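	Each `-checkend 86400` run above asks whether the certificate expires within the next 86400 seconds (24 hours); the exit status, not the output, carries the answer, and presumably drives whether the profile certs get regenerated. A minimal sketch of the same check against one of the files above:
	
		# Sketch: exit 0 = valid for at least another 24h, non-zero = expiring sooner.
		if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
		  echo "certificate still valid for >= 24h"
		else
		  echo "certificate expires within 24h"
		fi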
	I1216 20:59:35.479720   60421 kubeadm.go:392] StartCluster: {Name:no-preload-232338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:59:35.479824   60421 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:59:35.479901   60421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:35.521238   60421 cri.go:89] found id: ""
	I1216 20:59:35.521331   60421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 20:59:35.534818   60421 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 20:59:35.534848   60421 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 20:59:35.534893   60421 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 20:59:35.547460   60421 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 20:59:35.548501   60421 kubeconfig.go:125] found "no-preload-232338" server: "https://192.168.50.240:8443"
	I1216 20:59:35.550575   60421 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 20:59:35.560957   60421 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.240
	I1216 20:59:35.561018   60421 kubeadm.go:1160] stopping kube-system containers ...
	I1216 20:59:35.561033   60421 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 20:59:35.561094   60421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:35.598970   60421 cri.go:89] found id: ""
	I1216 20:59:35.599082   60421 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 20:59:35.618027   60421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:59:35.629418   60421 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:59:35.629455   60421 kubeadm.go:157] found existing configuration files:
	
	I1216 20:59:35.629501   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 20:59:35.639825   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:59:35.639896   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:59:35.650676   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 20:59:35.662171   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:59:35.662228   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:59:35.674780   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 20:59:35.686565   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:59:35.686640   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:59:35.698956   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 20:59:35.710813   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:59:35.710874   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
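	The four grep/rm pairs above apply one rule per kubeconfig: if the file under /etc/kubernetes does not reference https://control-plane.minikube.internal:8443, delete it so the following kubeadm phases regenerate it. The same cleanup written as a single loop, offered as a consolidated sketch rather than the literal per-file commands captured here:
	
		# Sketch: drop kubeconfigs that do not point at the expected control-plane endpoint.
		ENDPOINT=https://control-plane.minikube.internal:8443
		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q "${ENDPOINT}" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
		done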
	I1216 20:59:35.723307   60421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 20:59:35.734712   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:35.863375   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.021512   60421 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.158099337s)
	I1216 20:59:37.021546   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.269641   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.348978   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.428210   60421 api_server.go:52] waiting for apiserver process to appear ...
	I1216 20:59:37.428296   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:35.800344   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.800861   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Found IP for machine: 192.168.39.162
	I1216 20:59:35.800889   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has current primary IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.800899   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Reserving static IP address...
	I1216 20:59:35.801367   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-327790", mac: "52:54:00:68:47:d5", ip: "192.168.39.162"} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.801395   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Reserved static IP address: 192.168.39.162
	I1216 20:59:35.801419   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | skip adding static IP to network mk-default-k8s-diff-port-327790 - found existing host DHCP lease matching {name: "default-k8s-diff-port-327790", mac: "52:54:00:68:47:d5", ip: "192.168.39.162"}
	I1216 20:59:35.801439   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for SSH to be available...
	I1216 20:59:35.801452   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Getting to WaitForSSH function...
	I1216 20:59:35.803875   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.804226   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.804257   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.804407   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Using SSH client type: external
	I1216 20:59:35.804439   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa (-rw-------)
	I1216 20:59:35.804472   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:35.804493   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | About to run SSH command:
	I1216 20:59:35.804517   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | exit 0
	I1216 20:59:35.935325   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | SSH cmd err, output: <nil>: 
	I1216 20:59:35.935765   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetConfigRaw
	I1216 20:59:35.936442   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:35.938945   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.939369   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.939395   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.939654   60829 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/config.json ...
	I1216 20:59:35.939915   60829 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:35.939938   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:35.940183   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:35.942412   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.942758   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.942787   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.942885   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:35.943067   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:35.943205   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:35.943347   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:35.943501   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:35.943687   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:35.943697   60829 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:36.060257   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:36.060297   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.060608   60829 buildroot.go:166] provisioning hostname "default-k8s-diff-port-327790"
	I1216 20:59:36.060634   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.060853   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.063758   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.064060   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.064097   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.064222   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.064427   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.064600   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.064745   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.064910   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.065132   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.065151   60829 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-327790 && echo "default-k8s-diff-port-327790" | sudo tee /etc/hostname
	I1216 20:59:36.194522   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-327790
	
	I1216 20:59:36.194555   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.197422   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.197770   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.197818   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.198007   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.198217   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.198446   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.198606   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.198803   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.199037   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.199062   60829 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-327790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-327790/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-327790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:36.320779   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:36.320808   60829 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:36.320833   60829 buildroot.go:174] setting up certificates
	I1216 20:59:36.320845   60829 provision.go:84] configureAuth start
	I1216 20:59:36.320854   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.321171   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:36.323701   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.324019   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.324044   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.324254   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.326002   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.326317   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.326348   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.326478   60829 provision.go:143] copyHostCerts
	I1216 20:59:36.326555   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:36.326567   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:36.326635   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:36.326747   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:36.326759   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:36.326786   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:36.326856   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:36.326866   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:36.326887   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:36.326949   60829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-327790 san=[127.0.0.1 192.168.39.162 default-k8s-diff-port-327790 localhost minikube]
	I1216 20:59:36.480215   60829 provision.go:177] copyRemoteCerts
	I1216 20:59:36.480278   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:36.480304   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.482859   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.483213   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.483258   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.483500   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.483712   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.483903   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.484087   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:36.571252   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:36.599399   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1216 20:59:36.624194   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 20:59:36.649294   60829 provision.go:87] duration metric: took 328.437433ms to configureAuth
	I1216 20:59:36.649325   60829 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:36.649494   60829 config.go:182] Loaded profile config "default-k8s-diff-port-327790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:59:36.649567   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.652411   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.652838   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.652868   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.653006   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.653264   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.653490   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.653704   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.653879   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.654059   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.654076   60829 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:36.893006   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:36.893043   60829 machine.go:96] duration metric: took 953.113126ms to provisionDockerMachine
	I1216 20:59:36.893057   60829 start.go:293] postStartSetup for "default-k8s-diff-port-327790" (driver="kvm2")
	I1216 20:59:36.893070   60829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:36.893101   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:36.893466   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:36.893494   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.896151   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.896531   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.896561   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.896683   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.896893   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.897100   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.897280   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:36.982077   60829 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:36.986598   60829 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:36.986624   60829 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:36.986702   60829 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:36.986795   60829 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:36.986919   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:36.996453   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:37.021838   60829 start.go:296] duration metric: took 128.770799ms for postStartSetup
	I1216 20:59:37.021873   60829 fix.go:56] duration metric: took 19.961410312s for fixHost
	I1216 20:59:37.021896   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.024668   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.025171   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.025207   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.025369   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.025591   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.025746   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.025892   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.026040   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:37.026257   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:37.026273   60829 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:37.140228   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382777.110726967
	
	I1216 20:59:37.140254   60829 fix.go:216] guest clock: 1734382777.110726967
	I1216 20:59:37.140264   60829 fix.go:229] Guest: 2024-12-16 20:59:37.110726967 +0000 UTC Remote: 2024-12-16 20:59:37.021877328 +0000 UTC m=+246.706572335 (delta=88.849639ms)
	I1216 20:59:37.140308   60829 fix.go:200] guest clock delta is within tolerance: 88.849639ms
	I1216 20:59:37.140315   60829 start.go:83] releasing machines lock for "default-k8s-diff-port-327790", held for 20.079880217s
	I1216 20:59:37.140347   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.140632   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:37.143268   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.143748   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.143775   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.143983   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144601   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144789   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144883   60829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:37.144930   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.145028   60829 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:37.145060   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.147817   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148192   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.148219   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148315   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148364   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.148576   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.148755   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.148776   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148804   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.148964   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:37.149020   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.149141   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.149285   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.149439   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:37.232354   60829 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:37.261803   60829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:37.416094   60829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:37.425458   60829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:37.425566   60829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:37.448873   60829 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:37.448914   60829 start.go:495] detecting cgroup driver to use...
	I1216 20:59:37.449014   60829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:37.472474   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:37.492445   60829 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:37.492518   60829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:37.510478   60829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:37.525452   60829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:37.642105   60829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:37.814506   60829 docker.go:233] disabling docker service ...
	I1216 20:59:37.814590   60829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:37.829046   60829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:37.845049   60829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:38.009931   60829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:38.158000   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:38.174376   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:38.197489   60829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 20:59:38.197555   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.213974   60829 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:38.214034   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.230383   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.244599   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.257574   60829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:38.273377   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.285854   60829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.312687   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
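	The sed edits above pin the pause image to registry.k8s.io/pause:3.10, set cgroup_manager to "cgroupfs", add conmon_cgroup = "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0, all in /etc/crio/crio.conf.d/02-crio.conf. A quick spot-check of the resulting drop-in (a verification sketch, not one of the captured commands):
	
		# Sketch: confirm the CRI-O drop-in ends up with the intended keys.
		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
		  /etc/crio/crio.conf.d/02-crio.conf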
	I1216 20:59:38.329105   60829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:38.343596   60829 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:38.343679   60829 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:38.362530   60829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 20:59:38.374384   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:38.564793   60829 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:38.682792   60829 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:38.682873   60829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
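	"Will wait 60s for socket path" above is a bounded poll for /var/run/crio/crio.sock after the crio restart. The shell-level equivalent of that waiting pattern, as a sketch (minikube does this in Go, not via these commands):
	
		# Sketch: poll up to 60 seconds for the CRI-O socket to appear.
		for i in $(seq 1 60); do
		  [ -S /var/run/crio/crio.sock ] && break
		  sleep 1
		done
		stat /var/run/crio/crio.sock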
	I1216 20:59:38.689164   60829 start.go:563] Will wait 60s for crictl version
	I1216 20:59:38.689251   60829 ssh_runner.go:195] Run: which crictl
	I1216 20:59:38.693994   60829 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:38.746808   60829 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:38.746913   60829 ssh_runner.go:195] Run: crio --version
	I1216 20:59:38.788490   60829 ssh_runner.go:195] Run: crio --version
	I1216 20:59:38.823957   60829 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 20:59:37.167470   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .Start
	I1216 20:59:37.167715   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring networks are active...
	I1216 20:59:37.168626   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring network default is active
	I1216 20:59:37.169114   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring network mk-old-k8s-version-847766 is active
	I1216 20:59:37.169670   60933 main.go:141] libmachine: (old-k8s-version-847766) Getting domain xml...
	I1216 20:59:37.170345   60933 main.go:141] libmachine: (old-k8s-version-847766) Creating domain...
	I1216 20:59:38.535579   60933 main.go:141] libmachine: (old-k8s-version-847766) Waiting to get IP...
	I1216 20:59:38.536661   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:38.537089   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:38.537174   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:38.537078   61973 retry.go:31] will retry after 277.62307ms: waiting for machine to come up
	I1216 20:59:38.816788   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:38.817329   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:38.817360   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:38.817272   61973 retry.go:31] will retry after 346.694382ms: waiting for machine to come up
	I1216 20:59:39.165778   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:39.166377   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:39.166436   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:39.166355   61973 retry.go:31] will retry after 416.599295ms: waiting for machine to come up
	I1216 20:59:38.825413   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:38.828442   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:38.828836   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:38.828870   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:38.829125   60829 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:38.833715   60829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:38.848989   60829 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-327790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:38.849121   60829 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:59:38.849169   60829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:38.891356   60829 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 20:59:38.891432   60829 ssh_runner.go:195] Run: which lz4
	I1216 20:59:38.896669   60829 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 20:59:38.901209   60829 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 20:59:38.901253   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1216 20:59:37.928929   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:38.428939   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:38.454184   60421 api_server.go:72] duration metric: took 1.02597754s to wait for apiserver process to appear ...
	I1216 20:59:38.454211   60421 api_server.go:88] waiting for apiserver healthz status ...
	I1216 20:59:38.454252   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:38.454842   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 20:59:38.954378   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:39.585259   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:39.585762   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:39.585791   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:39.585737   61973 retry.go:31] will retry after 526.969594ms: waiting for machine to come up
	I1216 20:59:40.114653   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:40.115175   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:40.115205   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:40.115140   61973 retry.go:31] will retry after 502.283372ms: waiting for machine to come up
	I1216 20:59:40.619067   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:40.619633   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:40.619682   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:40.619571   61973 retry.go:31] will retry after 764.799982ms: waiting for machine to come up
	I1216 20:59:41.385515   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:41.386066   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:41.386100   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:41.386027   61973 retry.go:31] will retry after 982.237202ms: waiting for machine to come up
	I1216 20:59:42.369934   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:42.370414   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:42.370449   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:42.370373   61973 retry.go:31] will retry after 1.163280736s: waiting for machine to come up
	I1216 20:59:43.534829   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:43.535194   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:43.535224   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:43.535143   61973 retry.go:31] will retry after 1.630958514s: waiting for machine to come up
	I1216 20:59:40.539994   60829 crio.go:462] duration metric: took 1.643361409s to copy over tarball
	I1216 20:59:40.540066   60829 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 20:59:42.840346   60829 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.30025199s)
	I1216 20:59:42.840382   60829 crio.go:469] duration metric: took 2.300357568s to extract the tarball
	I1216 20:59:42.840392   60829 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 20:59:42.881650   60829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:42.928089   60829 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 20:59:42.928120   60829 cache_images.go:84] Images are preloaded, skipping loading
	I1216 20:59:42.928129   60829 kubeadm.go:934] updating node { 192.168.39.162 8444 v1.32.0 crio true true} ...
	I1216 20:59:42.928222   60829 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-327790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 20:59:42.928286   60829 ssh_runner.go:195] Run: crio config
	I1216 20:59:42.983315   60829 cni.go:84] Creating CNI manager for ""
	I1216 20:59:42.983348   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:42.983360   60829 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:59:42.983396   60829 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8444 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-327790 NodeName:default-k8s-diff-port-327790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 20:59:42.983556   60829 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-327790"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.162"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 20:59:42.983631   60829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 20:59:42.996192   60829 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:59:42.996283   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:59:43.008389   60829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 20:59:43.027984   60829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:59:43.045672   60829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1216 20:59:43.063620   60829 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1216 20:59:43.067925   60829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:43.082946   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:43.220929   60829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:59:43.243843   60829 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790 for IP: 192.168.39.162
	I1216 20:59:43.243870   60829 certs.go:194] generating shared ca certs ...
	I1216 20:59:43.243888   60829 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:59:43.244125   60829 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:59:43.244185   60829 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:59:43.244200   60829 certs.go:256] generating profile certs ...
	I1216 20:59:43.244324   60829 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.key
	I1216 20:59:43.244400   60829 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.key.0f0bf709
	I1216 20:59:43.244449   60829 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.key
	I1216 20:59:43.244606   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:59:43.244649   60829 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:59:43.244666   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:59:43.244689   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:59:43.244711   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:59:43.244731   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:59:43.244776   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:43.245449   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:59:43.283598   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:59:43.309321   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:59:43.343071   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:59:43.379763   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1216 20:59:43.409794   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 20:59:43.437074   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:59:43.462616   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 20:59:43.487711   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:59:43.512636   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:59:43.539050   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:59:43.566507   60829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 20:59:43.584425   60829 ssh_runner.go:195] Run: openssl version
	I1216 20:59:43.590996   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:59:43.604384   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.609342   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.609404   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.615902   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:59:43.627432   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:59:43.638929   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.644189   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.644267   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.650550   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 20:59:43.662678   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:59:43.674981   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.680022   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.680113   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.686159   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 20:59:43.697897   60829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:59:43.702835   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 20:59:43.709262   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 20:59:43.716370   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 20:59:43.725031   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 20:59:43.732876   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 20:59:43.739810   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 20:59:43.746998   60829 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-327790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:59:43.747131   60829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:59:43.747189   60829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:43.791895   60829 cri.go:89] found id: ""
	I1216 20:59:43.791979   60829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 20:59:43.802858   60829 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 20:59:43.802886   60829 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 20:59:43.802943   60829 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 20:59:43.813313   60829 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 20:59:43.814296   60829 kubeconfig.go:125] found "default-k8s-diff-port-327790" server: "https://192.168.39.162:8444"
	I1216 20:59:43.816374   60829 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 20:59:43.825834   60829 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1216 20:59:43.825871   60829 kubeadm.go:1160] stopping kube-system containers ...
	I1216 20:59:43.825884   60829 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 20:59:43.825934   60829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:43.870890   60829 cri.go:89] found id: ""
	I1216 20:59:43.870965   60829 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 20:59:43.888155   60829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:59:43.898356   60829 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:59:43.898381   60829 kubeadm.go:157] found existing configuration files:
	
	I1216 20:59:43.898445   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1216 20:59:43.908232   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:59:43.908310   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:59:43.918637   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1216 20:59:43.928255   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:59:43.928343   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:59:43.938479   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1216 20:59:43.948085   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:59:43.948157   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:59:43.959080   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1216 20:59:43.969218   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:59:43.969275   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 20:59:43.980063   60829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 20:59:43.990768   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:44.125741   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:44.845177   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:45.049512   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:45.162055   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:45.284927   60829 api_server.go:52] waiting for apiserver process to appear ...
	I1216 20:59:45.285036   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:43.954985   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:43.955087   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:45.168144   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:45.168719   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:45.168750   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:45.168671   61973 retry.go:31] will retry after 1.835631107s: waiting for machine to come up
	I1216 20:59:47.005854   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:47.006380   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:47.006422   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:47.006339   61973 retry.go:31] will retry after 1.943800898s: waiting for machine to come up
	I1216 20:59:48.951552   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:48.952050   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:48.952114   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:48.952008   61973 retry.go:31] will retry after 2.949898251s: waiting for machine to come up
	I1216 20:59:45.785964   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:46.285989   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:46.339555   60829 api_server.go:72] duration metric: took 1.054628295s to wait for apiserver process to appear ...
	I1216 20:59:46.339597   60829 api_server.go:88] waiting for apiserver healthz status ...
	I1216 20:59:46.339636   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:46.340197   60829 api_server.go:269] stopped: https://192.168.39.162:8444/healthz: Get "https://192.168.39.162:8444/healthz": dial tcp 192.168.39.162:8444: connect: connection refused
	I1216 20:59:46.839771   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.461907   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 20:59:49.461943   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 20:59:49.461958   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.513069   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 20:59:49.513121   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 20:59:49.840517   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.846051   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 20:59:49.846086   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 20:59:50.339824   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:50.347663   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 20:59:50.347708   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 20:59:50.840385   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:50.844943   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1216 20:59:50.854518   60829 api_server.go:141] control plane version: v1.32.0
	I1216 20:59:50.854546   60829 api_server.go:131] duration metric: took 4.514941385s to wait for apiserver health ...
	I1216 20:59:50.854554   60829 cni.go:84] Creating CNI manager for ""
	I1216 20:59:50.854560   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:50.856538   60829 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 20:59:48.956352   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:48.956414   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:51.905108   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:51.905560   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:51.905594   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:51.905505   61973 retry.go:31] will retry after 3.44069953s: waiting for machine to come up
	I1216 20:59:50.858169   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 20:59:50.882809   60829 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 20:59:50.912787   60829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 20:59:50.933650   60829 system_pods.go:59] 8 kube-system pods found
	I1216 20:59:50.933693   60829 system_pods.go:61] "coredns-668d6bf9bc-tqh9s" [56b4db37-b6bc-49eb-b45f-b8b4d1f16eed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 20:59:50.933705   60829 system_pods.go:61] "etcd-default-k8s-diff-port-327790" [067f7c41-3763-42d3-af06-ad50fad3d206] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 20:59:50.933713   60829 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-327790" [f1964b5b-9d2b-4f82-afc6-2f359c9b8827] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 20:59:50.933722   60829 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-327790" [fd7479e3-be26-4bb0-b53a-e40766a33996] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 20:59:50.933742   60829 system_pods.go:61] "kube-proxy-mplxr" [027abdc5-7022-4528-a93f-36f3b10115ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 20:59:50.933751   60829 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-327790" [d7416a53-ccb4-46fd-9992-46cbf7ec0a3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 20:59:50.933763   60829 system_pods.go:61] "metrics-server-f79f97bbb-hlt7s" [d42906e3-387c-493e-9d06-5bb654dc9784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 20:59:50.933772   60829 system_pods.go:61] "storage-provisioner" [c774635a-faca-4a1a-8f4e-2161447ebaa1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 20:59:50.933785   60829 system_pods.go:74] duration metric: took 20.968988ms to wait for pod list to return data ...
	I1216 20:59:50.933804   60829 node_conditions.go:102] verifying NodePressure condition ...
	I1216 20:59:50.937958   60829 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 20:59:50.937986   60829 node_conditions.go:123] node cpu capacity is 2
	I1216 20:59:50.938008   60829 node_conditions.go:105] duration metric: took 4.196302ms to run NodePressure ...
	I1216 20:59:50.938030   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:51.231412   60829 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 20:59:51.236005   60829 kubeadm.go:739] kubelet initialised
	I1216 20:59:51.236029   60829 kubeadm.go:740] duration metric: took 4.585977ms waiting for restarted kubelet to initialise ...
	I1216 20:59:51.236042   60829 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 20:59:51.243608   60829 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace to be "Ready" ...
	I1216 20:59:53.250907   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 20:59:56.696377   60215 start.go:364] duration metric: took 54.44579772s to acquireMachinesLock for "embed-certs-606219"
	I1216 20:59:56.696450   60215 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:56.696470   60215 fix.go:54] fixHost starting: 
	I1216 20:59:56.696862   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:56.696902   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:56.714627   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42069
	I1216 20:59:56.715074   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:56.715599   60215 main.go:141] libmachine: Using API Version  1
	I1216 20:59:56.715629   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:56.715953   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:56.716116   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 20:59:56.716252   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 20:59:56.717876   60215 fix.go:112] recreateIfNeeded on embed-certs-606219: state=Stopped err=<nil>
	I1216 20:59:56.717902   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	W1216 20:59:56.718088   60215 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:56.720072   60215 out.go:177] * Restarting existing kvm2 VM for "embed-certs-606219" ...
	I1216 20:59:53.957328   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:53.957395   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:55.349557   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.350105   60933 main.go:141] libmachine: (old-k8s-version-847766) Found IP for machine: 192.168.72.240
	I1216 20:59:55.350129   60933 main.go:141] libmachine: (old-k8s-version-847766) Reserving static IP address...
	I1216 20:59:55.350140   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has current primary IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.350574   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "old-k8s-version-847766", mac: "52:54:00:c4:f2:8a", ip: "192.168.72.240"} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.350608   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | skip adding static IP to network mk-old-k8s-version-847766 - found existing host DHCP lease matching {name: "old-k8s-version-847766", mac: "52:54:00:c4:f2:8a", ip: "192.168.72.240"}
	I1216 20:59:55.350623   60933 main.go:141] libmachine: (old-k8s-version-847766) Reserved static IP address: 192.168.72.240
	I1216 20:59:55.350646   60933 main.go:141] libmachine: (old-k8s-version-847766) Waiting for SSH to be available...
	I1216 20:59:55.350662   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Getting to WaitForSSH function...
	I1216 20:59:55.353011   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.353346   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.353369   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.353535   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Using SSH client type: external
	I1216 20:59:55.353560   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa (-rw-------)
	I1216 20:59:55.353592   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:55.353606   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | About to run SSH command:
	I1216 20:59:55.353621   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | exit 0
	I1216 20:59:55.480726   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | SSH cmd err, output: <nil>: 
	I1216 20:59:55.481062   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetConfigRaw
	I1216 20:59:55.481692   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:55.484113   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.484500   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.484537   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.484769   60933 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/config.json ...
	I1216 20:59:55.484985   60933 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:55.485008   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:55.485220   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.487511   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.487835   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.487862   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.487958   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.488134   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.488268   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.488405   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.488546   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.488725   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.488735   60933 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:55.596092   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:55.596127   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.596401   60933 buildroot.go:166] provisioning hostname "old-k8s-version-847766"
	I1216 20:59:55.596426   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.596644   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.599286   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.599631   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.599662   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.599814   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.600010   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.600166   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.600299   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.600462   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.600665   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.600678   60933 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-847766 && echo "old-k8s-version-847766" | sudo tee /etc/hostname
	I1216 20:59:55.731851   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-847766
	
	I1216 20:59:55.731879   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.734802   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.735155   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.735186   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.735422   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.735650   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.735815   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.736030   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.736194   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.736377   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.736393   60933 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-847766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-847766/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-847766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:55.857050   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:55.857108   60933 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:55.857138   60933 buildroot.go:174] setting up certificates
	I1216 20:59:55.857163   60933 provision.go:84] configureAuth start
	I1216 20:59:55.857180   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.857505   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:55.860286   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.860613   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.860643   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.860826   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.863292   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.863682   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.863709   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.863871   60933 provision.go:143] copyHostCerts
	I1216 20:59:55.863920   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:55.863929   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:55.863986   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:55.864069   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:55.864077   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:55.864104   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:55.864159   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:55.864177   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:55.864202   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:55.864250   60933 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-847766 san=[127.0.0.1 192.168.72.240 localhost minikube old-k8s-version-847766]
	I1216 20:59:56.058548   60933 provision.go:177] copyRemoteCerts
	I1216 20:59:56.058603   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:56.058638   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.061354   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.061666   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.061707   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.061838   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.062039   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.062200   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.062356   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.146788   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:56.172789   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1216 20:59:56.198040   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 20:59:56.222476   60933 provision.go:87] duration metric: took 365.299433ms to configureAuth
	I1216 20:59:56.222505   60933 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:56.222706   60933 config.go:182] Loaded profile config "old-k8s-version-847766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:59:56.222790   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.225376   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.225752   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.225779   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.225965   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.226182   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.226363   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.226516   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.226687   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:56.226887   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:56.226906   60933 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:56.451434   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:56.451464   60933 machine.go:96] duration metric: took 966.463181ms to provisionDockerMachine
	I1216 20:59:56.451478   60933 start.go:293] postStartSetup for "old-k8s-version-847766" (driver="kvm2")
	I1216 20:59:56.451513   60933 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:56.451541   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.451926   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:56.451980   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.454840   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.455302   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.455331   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.455454   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.455661   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.455814   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.455988   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.542904   60933 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:56.547362   60933 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:56.547389   60933 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:56.547467   60933 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:56.547568   60933 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:56.547677   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:56.557902   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:56.582796   60933 start.go:296] duration metric: took 131.303406ms for postStartSetup
	I1216 20:59:56.582846   60933 fix.go:56] duration metric: took 19.442354832s for fixHost
	I1216 20:59:56.582872   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.585478   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.585803   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.585831   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.586011   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.586194   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.586358   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.586472   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.586640   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:56.586809   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:56.586819   60933 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:56.696254   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382796.650794736
	
	I1216 20:59:56.696274   60933 fix.go:216] guest clock: 1734382796.650794736
	I1216 20:59:56.696281   60933 fix.go:229] Guest: 2024-12-16 20:59:56.650794736 +0000 UTC Remote: 2024-12-16 20:59:56.582851742 +0000 UTC m=+262.230512454 (delta=67.942994ms)
	I1216 20:59:56.696299   60933 fix.go:200] guest clock delta is within tolerance: 67.942994ms
	I1216 20:59:56.696304   60933 start.go:83] releasing machines lock for "old-k8s-version-847766", held for 19.555844424s
	I1216 20:59:56.696333   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.696608   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:56.699486   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.699932   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.699964   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.700068   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700645   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700846   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700948   60933 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:56.701007   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.701115   60933 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:56.701140   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.703937   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704117   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704314   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.704342   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704496   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.704567   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.704601   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704680   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.704762   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.704836   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.704982   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.704987   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.705134   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.705259   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.784295   60933 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:56.817481   60933 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:56.968124   60933 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:56.979827   60933 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:56.979892   60933 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:56.997867   60933 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:56.997891   60933 start.go:495] detecting cgroup driver to use...
	I1216 20:59:56.997954   60933 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:57.016064   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:57.031596   60933 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:57.031665   60933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:57.047562   60933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:57.062737   60933 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:57.183918   60933 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:57.354699   60933 docker.go:233] disabling docker service ...
	I1216 20:59:57.354794   60933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:57.373311   60933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:57.390014   60933 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:57.523623   60933 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:57.656261   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:57.671374   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:57.692647   60933 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1216 20:59:57.692709   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.704496   60933 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:57.704548   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.715848   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.727022   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.738899   60933 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:57.756457   60933 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:57.773236   60933 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:57.773289   60933 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:57.789209   60933 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 20:59:57.800881   60933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:57.927794   60933 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:58.038173   60933 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:58.038256   60933 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:58.044633   60933 start.go:563] Will wait 60s for crictl version
	I1216 20:59:58.044705   60933 ssh_runner.go:195] Run: which crictl
	I1216 20:59:58.048781   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:58.088449   60933 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:58.088579   60933 ssh_runner.go:195] Run: crio --version
	I1216 20:59:58.119211   60933 ssh_runner.go:195] Run: crio --version
	I1216 20:59:58.151411   60933 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1216 20:59:58.152582   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:58.155196   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:58.155558   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:58.155587   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:58.155763   60933 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:58.160369   60933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:58.174013   60933 kubeadm.go:883] updating cluster {Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:58.174155   60933 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:59:58.174212   60933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:58.226674   60933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 20:59:58.226747   60933 ssh_runner.go:195] Run: which lz4
	I1216 20:59:58.231330   60933 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 20:59:58.236178   60933 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 20:59:58.236214   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1216 20:59:56.721746   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Start
	I1216 20:59:56.721946   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring networks are active...
	I1216 20:59:56.722810   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring network default is active
	I1216 20:59:56.723209   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring network mk-embed-certs-606219 is active
	I1216 20:59:56.723644   60215 main.go:141] libmachine: (embed-certs-606219) Getting domain xml...
	I1216 20:59:56.724387   60215 main.go:141] libmachine: (embed-certs-606219) Creating domain...
	I1216 20:59:58.005906   60215 main.go:141] libmachine: (embed-certs-606219) Waiting to get IP...
	I1216 20:59:58.006646   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.007021   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.007136   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.007017   62108 retry.go:31] will retry after 280.124694ms: waiting for machine to come up
	I1216 20:59:58.288552   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.289049   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.289078   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.289013   62108 retry.go:31] will retry after 299.873899ms: waiting for machine to come up
	I1216 20:59:58.590757   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.591593   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.591625   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.591487   62108 retry.go:31] will retry after 486.884982ms: waiting for machine to come up
	I1216 20:59:59.079996   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:59.080618   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:59.080649   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:59.080581   62108 retry.go:31] will retry after 608.856993ms: waiting for machine to come up
	I1216 20:59:59.691549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:59.692107   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:59.692139   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:59.692064   62108 retry.go:31] will retry after 730.774006ms: waiting for machine to come up
	I1216 20:59:55.752607   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 20:59:58.251902   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:00.254126   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 20:59:58.958114   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:58.958161   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.567722   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": read tcp 192.168.50.1:38738->192.168.50.240:8443: read: connection reset by peer
	I1216 20:59:59.567773   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.568271   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 20:59:59.954745   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.955447   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 21:00:00.455116   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:00.456036   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 21:00:00.954418   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:00.100507   60933 crio.go:462] duration metric: took 1.869217257s to copy over tarball
	I1216 21:00:00.100619   60933 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 21:00:03.581430   60933 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.480755636s)
	I1216 21:00:03.581469   60933 crio.go:469] duration metric: took 3.480924144s to extract the tarball
	I1216 21:00:03.581478   60933 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 21:00:03.627932   60933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:03.667985   60933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 21:00:03.668013   60933 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 21:00:03.668078   60933 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:03.668110   60933 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.668207   60933 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.668262   60933 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.668262   60933 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:03.668332   60933 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1216 21:00:03.668215   60933 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.668092   60933 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.670096   60933 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:03.670294   60933 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.670305   60933 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.670305   60933 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.670333   60933 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.670394   60933 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1216 21:00:03.670396   60933 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:03.670467   60933 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.861573   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.869704   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.885911   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.904748   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1216 21:00:03.905328   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.906138   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.936548   60933 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1216 21:00:03.936658   60933 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.936736   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.019039   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.033811   60933 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1216 21:00:04.033863   60933 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.033927   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.082946   60933 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1216 21:00:04.082995   60933 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.083008   60933 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1216 21:00:04.083050   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083055   60933 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1216 21:00:04.083063   60933 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1216 21:00:04.083073   60933 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.083133   60933 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1216 21:00:04.083203   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.083205   60933 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.083306   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083145   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083139   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.123434   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.123702   60933 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1216 21:00:04.123740   60933 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.123786   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.150878   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:04.155586   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.155774   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.155877   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.155968   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.156205   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.226110   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.226429   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:00.424272   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:00.424766   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:00.424795   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:00.424712   62108 retry.go:31] will retry after 947.177724ms: waiting for machine to come up
	I1216 21:00:01.373798   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:01.374448   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:01.374486   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:01.374376   62108 retry.go:31] will retry after 755.735247ms: waiting for machine to come up
	I1216 21:00:02.132092   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:02.132690   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:02.132716   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:02.132636   62108 retry.go:31] will retry after 1.25933291s: waiting for machine to come up
	I1216 21:00:03.393390   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:03.393951   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:03.393987   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:03.393887   62108 retry.go:31] will retry after 1.654271195s: waiting for machine to come up
	I1216 21:00:00.768561   60829 pod_ready.go:93] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:00.768603   60829 pod_ready.go:82] duration metric: took 9.524968022s for pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:00.768619   60829 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:02.778467   60829 pod_ready.go:93] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:02.778507   60829 pod_ready.go:82] duration metric: took 2.009878604s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:02.778523   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:03.290454   60829 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:03.290490   60829 pod_ready.go:82] duration metric: took 511.956426ms for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:03.290505   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:04.533609   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:04.533639   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:04.533655   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:04.679801   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:04.679836   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:04.955306   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.723827   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.723870   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:05.723892   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.750638   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.750674   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:05.955092   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.983280   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.983332   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:06.454742   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:06.467886   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:06.467924   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:06.954428   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:06.960039   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 200:
	ok
	I1216 21:00:06.969187   60421 api_server.go:141] control plane version: v1.32.0
	I1216 21:00:06.969231   60421 api_server.go:131] duration metric: took 28.515011952s to wait for apiserver health ...
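	(The repeated healthz probes above follow a simple poll-until-200 pattern against the apiserver. As a rough illustration only, not minikube's actual api_server.go code, here is a minimal Go sketch of that loop; the endpoint address is taken from the log above, and skipping TLS verification is an assumption made purely to keep the sketch self-contained.)

	// healthzpoll.go - minimal sketch of polling an apiserver /healthz endpoint until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The bootstrapping apiserver serves a cert this sketch does not trust,
			// so verification is skipped here; real callers should pin the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.50.240:8443/healthz" // address taken from the log above
		for {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("healthz check error:", err)
			} else {
				fmt.Println("healthz returned", resp.StatusCode)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return // apiserver reports healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}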
	I1216 21:00:06.969242   60421 cni.go:84] Creating CNI manager for ""
	I1216 21:00:06.969249   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:06.971475   60421 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:00:06.973035   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:00:06.992348   60421 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:00:07.020819   60421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:00:07.035254   60421 system_pods.go:59] 8 kube-system pods found
	I1216 21:00:07.035308   60421 system_pods.go:61] "coredns-668d6bf9bc-snhjf" [c0cf42c8-521a-4d02-9d43-ff7a700b0eca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:00:07.035321   60421 system_pods.go:61] "etcd-no-preload-232338" [01ca2051-5953-44fd-bfff-40aa16ec7aca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 21:00:07.035335   60421 system_pods.go:61] "kube-apiserver-no-preload-232338" [f1fbbb3b-a0e5-4200-89ef-67085e51a31d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 21:00:07.035359   60421 system_pods.go:61] "kube-controller-manager-no-preload-232338" [200039ad-1a2c-4dc4-8307-d8c882d69f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 21:00:07.035373   60421 system_pods.go:61] "kube-proxy-5mw2b" [8fbddf14-8697-451a-a3c7-873fdd437247] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 21:00:07.035382   60421 system_pods.go:61] "kube-scheduler-no-preload-232338" [1b9a7a43-59fc-44ba-9863-04fb90e6554f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 21:00:07.035396   60421 system_pods.go:61] "metrics-server-f79f97bbb-5xf67" [447144e5-11d8-48f7-b2fd-7ab9fb3c04de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:00:07.035409   60421 system_pods.go:61] "storage-provisioner" [fb293bd2-f5be-4086-b821-ffd7df58dd5e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 21:00:07.035420   60421 system_pods.go:74] duration metric: took 14.571089ms to wait for pod list to return data ...
	I1216 21:00:07.035431   60421 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:00:07.044467   60421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:00:07.044592   60421 node_conditions.go:123] node cpu capacity is 2
	I1216 21:00:07.044633   60421 node_conditions.go:105] duration metric: took 9.191874ms to run NodePressure ...
	I1216 21:00:07.044668   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.388388   60421 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 21:00:07.394851   60421 kubeadm.go:739] kubelet initialised
	I1216 21:00:07.394881   60421 kubeadm.go:740] duration metric: took 6.459945ms waiting for restarted kubelet to initialise ...
	I1216 21:00:07.394891   60421 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:00:07.401877   60421 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.410697   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.410732   60421 pod_ready.go:82] duration metric: took 8.80876ms for pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.410744   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.410755   60421 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.418118   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "etcd-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.418149   60421 pod_ready.go:82] duration metric: took 7.383445ms for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.418163   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "etcd-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.418172   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.427341   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "kube-apiserver-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.427414   60421 pod_ready.go:82] duration metric: took 9.234588ms for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.427424   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "kube-apiserver-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.427432   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.435329   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.435378   60421 pod_ready.go:82] duration metric: took 7.931923ms for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.435392   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.435408   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5mw2b" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:04.457220   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.457399   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.457507   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.457596   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.457687   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.613834   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.613870   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.613923   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.613931   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.613960   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.613972   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.619915   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1216 21:00:04.791265   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1216 21:00:04.791297   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.791315   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1216 21:00:04.791352   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1216 21:00:04.791366   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1216 21:00:04.791384   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1216 21:00:04.836463   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1216 21:00:04.836536   60933 cache_images.go:92] duration metric: took 1.168508622s to LoadCachedImages
	W1216 21:00:04.836633   60933 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1216 21:00:04.836649   60933 kubeadm.go:934] updating node { 192.168.72.240 8443 v1.20.0 crio true true} ...
	I1216 21:00:04.836781   60933 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-847766 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 21:00:04.836877   60933 ssh_runner.go:195] Run: crio config
	I1216 21:00:04.898330   60933 cni.go:84] Creating CNI manager for ""
	I1216 21:00:04.898357   60933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:04.898371   60933 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 21:00:04.898396   60933 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.240 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-847766 NodeName:old-k8s-version-847766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1216 21:00:04.898560   60933 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-847766"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 21:00:04.898643   60933 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1216 21:00:04.910946   60933 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 21:00:04.911045   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 21:00:04.923199   60933 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1216 21:00:04.942705   60933 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 21:00:04.976598   60933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1216 21:00:05.001967   60933 ssh_runner.go:195] Run: grep 192.168.72.240	control-plane.minikube.internal$ /etc/hosts
	I1216 21:00:05.006819   60933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:00:05.020604   60933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:05.143039   60933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:00:05.162507   60933 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766 for IP: 192.168.72.240
	I1216 21:00:05.162535   60933 certs.go:194] generating shared ca certs ...
	I1216 21:00:05.162554   60933 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:05.162749   60933 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 21:00:05.162792   60933 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 21:00:05.162803   60933 certs.go:256] generating profile certs ...
	I1216 21:00:05.162907   60933 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.key
	I1216 21:00:05.162976   60933 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key.6c8704df
	I1216 21:00:05.163011   60933 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key
	I1216 21:00:05.163148   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 21:00:05.163176   60933 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 21:00:05.163186   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 21:00:05.163210   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 21:00:05.163278   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 21:00:05.163315   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 21:00:05.163379   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:05.164216   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 21:00:05.222491   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 21:00:05.253517   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 21:00:05.294338   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 21:00:05.342850   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 21:00:05.388068   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 21:00:05.422591   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 21:00:05.471916   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 21:00:05.505836   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 21:00:05.539404   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 21:00:05.570819   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 21:00:05.602079   60933 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 21:00:05.630577   60933 ssh_runner.go:195] Run: openssl version
	I1216 21:00:05.640017   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 21:00:05.653759   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.659573   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.659645   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.666667   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 21:00:05.680061   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 21:00:05.692776   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.698644   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.698728   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.705913   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 21:00:05.730062   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 21:00:05.750034   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.757158   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.757252   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.765679   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 21:00:05.782537   60933 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 21:00:05.788291   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 21:00:05.797707   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 21:00:05.807016   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 21:00:05.818160   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 21:00:05.827428   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 21:00:05.836499   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 21:00:05.846104   60933 kubeadm.go:392] StartCluster: {Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:00:05.846244   60933 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 21:00:05.846331   60933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:05.901274   60933 cri.go:89] found id: ""
	I1216 21:00:05.901376   60933 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 21:00:05.917353   60933 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 21:00:05.917381   60933 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 21:00:05.917439   60933 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 21:00:05.928587   60933 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 21:00:05.932546   60933 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-847766" does not appear in /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:00:05.933844   60933 kubeconfig.go:62] /home/jenkins/minikube-integration/20091-7083/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-847766" cluster setting kubeconfig missing "old-k8s-version-847766" context setting]
	I1216 21:00:05.935400   60933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:05.938054   60933 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 21:00:05.950384   60933 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.240
	I1216 21:00:05.950433   60933 kubeadm.go:1160] stopping kube-system containers ...
	I1216 21:00:05.950450   60933 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 21:00:05.950515   60933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:05.999495   60933 cri.go:89] found id: ""
	I1216 21:00:05.999588   60933 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 21:00:06.024765   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:00:06.037807   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:00:06.037836   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:00:06.037894   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:00:06.048926   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:00:06.048997   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:00:06.060167   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:00:06.070841   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:00:06.070910   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:00:06.083517   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:00:06.099124   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:00:06.099214   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:00:06.110004   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:00:06.125600   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:00:06.125668   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:00:06.137212   60933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:00:06.148873   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:06.316611   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.220187   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.549730   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.698864   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.815495   60933 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:00:07.815657   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:08.316003   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:08.816482   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:09.315805   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:05.050699   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:05.051378   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:05.051413   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:05.051296   62108 retry.go:31] will retry after 2.184829789s: waiting for machine to come up
	I1216 21:00:07.237618   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:07.238137   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:07.238166   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:07.238049   62108 retry.go:31] will retry after 2.531717629s: waiting for machine to come up
	I1216 21:00:05.713060   60829 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:05.798544   60829 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.798569   60829 pod_ready.go:82] duration metric: took 2.508055323s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.798582   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mplxr" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.805322   60829 pod_ready.go:93] pod "kube-proxy-mplxr" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.805361   60829 pod_ready.go:82] duration metric: took 6.77ms for pod "kube-proxy-mplxr" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.805399   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.812700   60829 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.812727   60829 pod_ready.go:82] duration metric: took 7.281992ms for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.812741   60829 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.822004   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:10.321160   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:09.443582   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:11.443796   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:09.815863   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:10.316664   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:10.815852   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:11.316175   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:11.816446   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:12.316040   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:12.816172   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:13.316460   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:13.815700   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:14.316469   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:09.772318   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:09.772837   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:09.772869   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:09.772797   62108 retry.go:31] will retry after 2.557982234s: waiting for machine to come up
	I1216 21:00:12.331877   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:12.332340   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:12.332368   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:12.332298   62108 retry.go:31] will retry after 4.202991569s: waiting for machine to come up
	I1216 21:00:12.322897   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:14.323015   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:13.942154   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:16.442411   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:14.816539   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:15.315737   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:15.816465   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.316470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.816451   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:17.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:17.816470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:18.316165   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:18.816448   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:19.315972   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.539792   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.540299   60215 main.go:141] libmachine: (embed-certs-606219) Found IP for machine: 192.168.61.151
	I1216 21:00:16.540324   60215 main.go:141] libmachine: (embed-certs-606219) Reserving static IP address...
	I1216 21:00:16.540341   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has current primary IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.540771   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "embed-certs-606219", mac: "52:54:00:63:37:8f", ip: "192.168.61.151"} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.540810   60215 main.go:141] libmachine: (embed-certs-606219) DBG | skip adding static IP to network mk-embed-certs-606219 - found existing host DHCP lease matching {name: "embed-certs-606219", mac: "52:54:00:63:37:8f", ip: "192.168.61.151"}
	I1216 21:00:16.540827   60215 main.go:141] libmachine: (embed-certs-606219) Reserved static IP address: 192.168.61.151
	I1216 21:00:16.540839   60215 main.go:141] libmachine: (embed-certs-606219) Waiting for SSH to be available...
	I1216 21:00:16.540847   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Getting to WaitForSSH function...
	I1216 21:00:16.542958   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.543461   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.543503   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.543629   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Using SSH client type: external
	I1216 21:00:16.543663   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa (-rw-------)
	I1216 21:00:16.543696   60215 main.go:141] libmachine: (embed-certs-606219) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 21:00:16.543713   60215 main.go:141] libmachine: (embed-certs-606219) DBG | About to run SSH command:
	I1216 21:00:16.543732   60215 main.go:141] libmachine: (embed-certs-606219) DBG | exit 0
	I1216 21:00:16.671576   60215 main.go:141] libmachine: (embed-certs-606219) DBG | SSH cmd err, output: <nil>: 
	I1216 21:00:16.671965   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetConfigRaw
	I1216 21:00:16.672599   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:16.675179   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.675520   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.675549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.675726   60215 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/config.json ...
	I1216 21:00:16.675938   60215 machine.go:93] provisionDockerMachine start ...
	I1216 21:00:16.675955   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:16.676186   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.678481   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.678824   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.678846   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.679020   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.679203   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.679388   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.679530   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.679689   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.679883   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.679896   60215 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 21:00:16.791925   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 21:00:16.791959   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:16.792224   60215 buildroot.go:166] provisioning hostname "embed-certs-606219"
	I1216 21:00:16.792261   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:16.792492   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.794967   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.795359   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.795388   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.795496   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.795674   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.795845   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.795995   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.796238   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.796466   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.796486   60215 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-606219 && echo "embed-certs-606219" | sudo tee /etc/hostname
	I1216 21:00:16.923887   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-606219
	
	I1216 21:00:16.923922   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.926689   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.927228   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.927283   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.927500   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.927724   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.927943   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.928139   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.928396   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.928574   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.928590   60215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-606219' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-606219/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-606219' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 21:00:17.045462   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
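
The hostname step above patches /etc/hosts so the new name resolves locally: the shell fragment either rewrites an existing 127.0.1.1 entry or appends one. A rough Go sketch of the same idempotent edit follows; the hostname and path are copied from the log, the grep guard from the shell is simplified away, and this is an illustration rather than minikube's code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites an existing "127.0.1.1 ..." line to point at
// hostname, or appends one if no such line exists - the same effect as the
// sed/tee fragment shown in the log.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	replaced := false
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "embed-certs-606219"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}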
	I1216 21:00:17.045508   60215 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 21:00:17.045540   60215 buildroot.go:174] setting up certificates
	I1216 21:00:17.045560   60215 provision.go:84] configureAuth start
	I1216 21:00:17.045578   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:17.045889   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:17.048733   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.049038   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.049062   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.049216   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.051371   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.051713   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.051748   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.051861   60215 provision.go:143] copyHostCerts
	I1216 21:00:17.051940   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 21:00:17.051954   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 21:00:17.052033   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 21:00:17.052187   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 21:00:17.052203   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 21:00:17.052230   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 21:00:17.052306   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 21:00:17.052317   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 21:00:17.052342   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 21:00:17.052413   60215 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.embed-certs-606219 san=[127.0.0.1 192.168.61.151 embed-certs-606219 localhost minikube]
	I1216 21:00:17.345020   60215 provision.go:177] copyRemoteCerts
	I1216 21:00:17.345079   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 21:00:17.345116   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.348019   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.348323   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.348350   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.348554   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.348783   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.348931   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.349093   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:17.434520   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 21:00:17.462097   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1216 21:00:17.488071   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 21:00:17.516428   60215 provision.go:87] duration metric: took 470.851303ms to configureAuth
	I1216 21:00:17.516461   60215 buildroot.go:189] setting minikube options for container-runtime
	I1216 21:00:17.516673   60215 config.go:182] Loaded profile config "embed-certs-606219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:00:17.516763   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.519637   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.519981   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.520019   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.520229   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.520451   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.520654   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.520813   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.520977   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:17.521148   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:17.521166   60215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 21:00:17.787052   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 21:00:17.787084   60215 machine.go:96] duration metric: took 1.111132885s to provisionDockerMachine
	I1216 21:00:17.787111   60215 start.go:293] postStartSetup for "embed-certs-606219" (driver="kvm2")
	I1216 21:00:17.787126   60215 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 21:00:17.787145   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:17.787551   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 21:00:17.787588   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.790332   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.790710   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.790743   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.790891   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.791130   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.791336   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.791492   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:17.881548   60215 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 21:00:17.886692   60215 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 21:00:17.886720   60215 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 21:00:17.886788   60215 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 21:00:17.886886   60215 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 21:00:17.886983   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 21:00:17.897832   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:17.926273   60215 start.go:296] duration metric: took 139.147156ms for postStartSetup
	I1216 21:00:17.926316   60215 fix.go:56] duration metric: took 21.229856025s for fixHost
	I1216 21:00:17.926338   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.929204   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.929600   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.929623   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.929809   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.930036   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.930220   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.930411   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.930554   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:17.930723   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:17.930734   60215 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 21:00:18.040530   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382817.988837134
	
	I1216 21:00:18.040557   60215 fix.go:216] guest clock: 1734382817.988837134
	I1216 21:00:18.040590   60215 fix.go:229] Guest: 2024-12-16 21:00:17.988837134 +0000 UTC Remote: 2024-12-16 21:00:17.926320778 +0000 UTC m=+358.266755361 (delta=62.516356ms)
	I1216 21:00:18.040639   60215 fix.go:200] guest clock delta is within tolerance: 62.516356ms
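
The fix.go lines above compare the guest clock (read via `date +%s.%N` over SSH) against the host clock and accept the 62.5ms skew as within tolerance. A small Go sketch of that comparison follows; the sample value is taken from the log, while the one-second tolerance is an assumption for illustration, since the log only says the delta was "within tolerance".

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the "seconds.nanoseconds" string printed by
// `date +%s.%N` on the guest into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1734382817.988837134") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// Assumed tolerance of 1s for this sketch.
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < time.Second)
}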
	I1216 21:00:18.040650   60215 start.go:83] releasing machines lock for "embed-certs-606219", held for 21.34422537s
	I1216 21:00:18.040682   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.040997   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:18.044100   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.044549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.044584   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.044727   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045237   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045454   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045544   60215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 21:00:18.045602   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:18.045673   60215 ssh_runner.go:195] Run: cat /version.json
	I1216 21:00:18.045702   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:18.048852   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049066   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049259   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.049285   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049423   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:18.049578   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.049610   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049611   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:18.049688   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:18.049885   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:18.049908   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:18.050090   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:18.050082   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:18.050313   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:18.128381   60215 ssh_runner.go:195] Run: systemctl --version
	I1216 21:00:18.165162   60215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 21:00:18.313679   60215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 21:00:18.321330   60215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 21:00:18.321407   60215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 21:00:18.340577   60215 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 21:00:18.340601   60215 start.go:495] detecting cgroup driver to use...
	I1216 21:00:18.340672   60215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 21:00:18.357273   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 21:00:18.373169   60215 docker.go:217] disabling cri-docker service (if available) ...
	I1216 21:00:18.373231   60215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 21:00:18.387904   60215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 21:00:18.402499   60215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 21:00:18.528830   60215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 21:00:18.677746   60215 docker.go:233] disabling docker service ...
	I1216 21:00:18.677839   60215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 21:00:18.693059   60215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 21:00:18.707368   60215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 21:00:18.870936   60215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 21:00:19.011321   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 21:00:19.025645   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 21:00:19.045618   60215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 21:00:19.045695   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.056739   60215 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 21:00:19.056813   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.067975   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.078954   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.090165   60215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 21:00:19.101906   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.112949   60215 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.131186   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
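
The series of sed one-liners above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pinning pause_image, forcing cgroup_manager to cgroupfs, and injecting the unprivileged-port sysctl. A rough Go sketch of that kind of line-oriented rewrite follows; the path and values are copied from the log, and this mirrors the sed behaviour rather than any CRI-O configuration API.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfValue replaces any existing `key = ...` line in the CRI-O drop-in
// with `key = "value"`, much like the `sed -i 's|^.*key = .*$|...|'` calls
// shown in the log.
func setConfValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = "%s"`, key, value)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	if err := setConfValue(conf, "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}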
	I1216 21:00:19.142238   60215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 21:00:19.152768   60215 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 21:00:19.152830   60215 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 21:00:19.169166   60215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 21:00:19.188991   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:19.319083   60215 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 21:00:19.427266   60215 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 21:00:19.427377   60215 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 21:00:19.432716   60215 start.go:563] Will wait 60s for crictl version
	I1216 21:00:19.432793   60215 ssh_runner.go:195] Run: which crictl
	I1216 21:00:19.437514   60215 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 21:00:19.484613   60215 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 21:00:19.484726   60215 ssh_runner.go:195] Run: crio --version
	I1216 21:00:19.519451   60215 ssh_runner.go:195] Run: crio --version
	I1216 21:00:19.555298   60215 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 21:00:19.556696   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:19.559802   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:19.560178   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:19.560201   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:19.560467   60215 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1216 21:00:19.565180   60215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:00:19.579863   60215 kubeadm.go:883] updating cluster {Name:embed-certs-606219 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.32.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 21:00:19.579991   60215 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 21:00:19.580037   60215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:19.618480   60215 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 21:00:19.618556   60215 ssh_runner.go:195] Run: which lz4
	I1216 21:00:19.622839   60215 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 21:00:19.627438   60215 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 21:00:19.627482   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
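
Before copying the ~398MB preload tarball, the step above asks the runtime for its image list (`sudo crictl images --output json`) and only falls back to the tarball when a marker image such as kube-apiserver:v1.32.0 is missing. A hedged Go sketch of that check follows; the JSON field names ("images", "repoTags") follow the CRI list-images response as commonly emitted by crictl, and both they and the image name should be treated as assumptions for illustration.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages models the subset of `crictl images --output json` we need.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already knows an image whose repo tag
// contains the given name, e.g. "registry.k8s.io/kube-apiserver:v1.32.0".
func hasImage(name string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, name) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.32.0")
	fmt.Println("preloaded:", ok, "err:", err)
}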
	I1216 21:00:16.819610   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:19.326427   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:17.942107   60421 pod_ready.go:93] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:17.942148   60421 pod_ready.go:82] duration metric: took 10.506728599s for pod "kube-proxy-5mw2b" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.942161   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.948518   60421 pod_ready.go:93] pod "kube-scheduler-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:17.948540   60421 pod_ready.go:82] duration metric: took 6.372903ms for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.948549   60421 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:19.956992   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:21.957271   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:19.815807   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:20.316465   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:20.816461   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.316731   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.816637   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:22.315727   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:22.816447   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:23.316510   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:23.816408   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:24.316454   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.237863   60215 crio.go:462] duration metric: took 1.615059209s to copy over tarball
	I1216 21:00:21.237956   60215 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 21:00:23.572502   60215 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.33450798s)
	I1216 21:00:23.572535   60215 crio.go:469] duration metric: took 2.334633133s to extract the tarball
	I1216 21:00:23.572549   60215 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 21:00:23.613530   60215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:23.667777   60215 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 21:00:23.667807   60215 cache_images.go:84] Images are preloaded, skipping loading
	I1216 21:00:23.667815   60215 kubeadm.go:934] updating node { 192.168.61.151 8443 v1.32.0 crio true true} ...
	I1216 21:00:23.667929   60215 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-606219 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 21:00:23.668009   60215 ssh_runner.go:195] Run: crio config
	I1216 21:00:23.716162   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:00:23.716184   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:23.716192   60215 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 21:00:23.716211   60215 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.151 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-606219 NodeName:embed-certs-606219 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 21:00:23.716337   60215 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-606219"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.151"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.151"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 21:00:23.716393   60215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 21:00:23.727236   60215 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 21:00:23.727337   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 21:00:23.737632   60215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1216 21:00:23.757380   60215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 21:00:23.774863   60215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1216 21:00:23.795070   60215 ssh_runner.go:195] Run: grep 192.168.61.151	control-plane.minikube.internal$ /etc/hosts
	I1216 21:00:23.799453   60215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:00:23.814278   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:23.962200   60215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:00:23.981947   60215 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219 for IP: 192.168.61.151
	I1216 21:00:23.981976   60215 certs.go:194] generating shared ca certs ...
	I1216 21:00:23.981999   60215 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:23.982156   60215 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 21:00:23.982197   60215 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 21:00:23.982204   60215 certs.go:256] generating profile certs ...
	I1216 21:00:23.982280   60215 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/client.key
	I1216 21:00:23.982336   60215 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.key.b346be49
	I1216 21:00:23.982376   60215 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.key
	I1216 21:00:23.982483   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 21:00:23.982513   60215 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 21:00:23.982523   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 21:00:23.982555   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 21:00:23.982582   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 21:00:23.982602   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 21:00:23.982655   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:23.983524   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 21:00:24.015369   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 21:00:24.043889   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 21:00:24.087807   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 21:00:24.137438   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1216 21:00:24.174859   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 21:00:24.200220   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 21:00:24.225811   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 21:00:24.251567   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 21:00:24.276737   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 21:00:24.302541   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 21:00:24.329876   60215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 21:00:24.350133   60215 ssh_runner.go:195] Run: openssl version
	I1216 21:00:24.356984   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 21:00:24.371219   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.376759   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.376816   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.383725   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 21:00:24.397759   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 21:00:24.409836   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.414765   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.414836   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.421662   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 21:00:24.433843   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 21:00:24.447839   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.453107   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.453185   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.459472   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 21:00:24.471714   60215 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 21:00:24.476881   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 21:00:24.486263   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 21:00:24.493146   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 21:00:24.500093   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 21:00:24.506599   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 21:00:24.512946   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
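
Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now. The same check can be done natively; below is a minimal Go sketch using crypto/x509, with the file path copied from the log purely for illustration.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for at
// least the given duration - the equivalent of `openssl x509 -checkend`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("valid for 24h:", ok, "err:", err)
}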
	I1216 21:00:24.519699   60215 kubeadm.go:392] StartCluster: {Name:embed-certs-606219 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32
.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:00:24.519780   60215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 21:00:24.519861   60215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:24.570867   60215 cri.go:89] found id: ""
	I1216 21:00:24.570952   60215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 21:00:24.583857   60215 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 21:00:24.583887   60215 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 21:00:24.583943   60215 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 21:00:24.595709   60215 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 21:00:24.596734   60215 kubeconfig.go:125] found "embed-certs-606219" server: "https://192.168.61.151:8443"
	I1216 21:00:24.598569   60215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 21:00:24.609876   60215 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.151
	I1216 21:00:24.609905   60215 kubeadm.go:1160] stopping kube-system containers ...
	I1216 21:00:24.609917   60215 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 21:00:24.609964   60215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:24.654487   60215 cri.go:89] found id: ""
	I1216 21:00:24.654567   60215 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 21:00:24.676658   60215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:00:24.689546   60215 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:00:24.689571   60215 kubeadm.go:157] found existing configuration files:
	
	I1216 21:00:24.689615   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:00:21.819876   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:23.820061   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:23.957368   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:26.556301   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:24.816467   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:25.315789   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:25.816410   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.316537   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.816144   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.316659   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.816126   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:28.316568   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:28.816151   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:29.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:24.700928   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:00:24.701012   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:00:24.713438   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:00:24.725184   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:00:24.725257   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:00:24.737483   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:00:24.749488   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:00:24.749546   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:00:24.762322   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:00:24.774309   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:00:24.774391   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
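The grep/rm pairs above are a small cleanup loop: each static kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the reference is missing (here the files simply do not exist yet), so the kubeadm phases that follow can regenerate them. A hedged sketch of that loop, with runCmd standing in for minikube's ssh_runner:

package main

import (
    "fmt"
    "os/exec"
)

func runCmd(args ...string) error {
    return exec.Command(args[0], args[1:]...).Run()
}

func cleanupStaleKubeconfigs(endpoint string) {
    confs := []string{
        "/etc/kubernetes/admin.conf",
        "/etc/kubernetes/kubelet.conf",
        "/etc/kubernetes/controller-manager.conf",
        "/etc/kubernetes/scheduler.conf",
    }
    for _, conf := range confs {
        // grep exits non-zero when the endpoint is absent or the file is missing.
        if err := runCmd("sudo", "grep", endpoint, conf); err != nil {
            fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
            _ = runCmd("sudo", "rm", "-f", conf)
        }
    }
}

func main() {
    cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}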
	I1216 21:00:24.787008   60215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:00:24.798394   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:25.009799   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:25.917432   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:26.175602   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:26.279646   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
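Rather than a full `kubeadm init`, the restart drives the individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged /var/tmp/minikube/kubeadm.yaml, with the version-matched binaries directory on PATH. A rough, illustrative equivalent of that sequence, reusing the paths from the log:

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    kubeadmCfg := "/var/tmp/minikube/kubeadm.yaml"
    binDir := "/var/lib/minikube/binaries/v1.32.0" // version-specific binaries, as in the log
    phases := [][]string{
        {"certs", "all"},
        {"kubeconfig", "all"},
        {"kubelet-start"},
        {"control-plane", "all"},
        {"etcd", "local"},
    }
    for _, phase := range phases {
        args := append([]string{"env", "PATH=" + binDir + ":/usr/bin", "kubeadm", "init", "phase"}, phase...)
        args = append(args, "--config", kubeadmCfg)
        if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
            fmt.Printf("phase %v failed: %v\n%s\n", phase, err, out)
            return
        }
    }
}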
	I1216 21:00:26.362472   60215 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:00:26.362564   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.862646   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.362663   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.421335   60215 api_server.go:72] duration metric: took 1.058863872s to wait for apiserver process to appear ...
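The repeated pgrep lines (both here and in the other profile's long run above) are a simple existence wait: the same pattern is re-run roughly every 500 ms until a minikube-started kube-apiserver process shows up. A small sketch of that wait, with the timeout chosen for illustration:

package main

import (
    "fmt"
    "os/exec"
    "time"
)

// waitForAPIServerProcess re-runs the pgrep from the log every 500 ms until a
// matching kube-apiserver process exists or the timeout is reached.
func waitForAPIServerProcess(timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        // pgrep exits 0 once a matching process is found.
        if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
            return nil
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("kube-apiserver process never appeared")
}

func main() {
    if err := waitForAPIServerProcess(5 * time.Minute); err != nil {
        fmt.Println(err)
    }
}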
	I1216 21:00:27.421361   60215 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:00:27.421380   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:27.421869   60215 api_server.go:269] stopped: https://192.168.61.151:8443/healthz: Get "https://192.168.61.151:8443/healthz": dial tcp 192.168.61.151:8443: connect: connection refused
	I1216 21:00:27.921493   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:26.471175   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:28.819200   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:30.365380   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.365410   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.365425   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.416044   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.416078   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.422219   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.432135   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.432161   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.921790   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.929160   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:30.929192   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:31.421708   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:31.432805   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:31.432839   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:31.922000   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:31.933658   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 200:
	ok
	I1216 21:00:31.945496   60215 api_server.go:141] control plane version: v1.32.0
	I1216 21:00:31.945534   60215 api_server.go:131] duration metric: took 4.524165612s to wait for apiserver health ...
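The healthz probes above go through the usual progression for a restarted apiserver: connection refused while the process starts, 403 for the anonymous probe before RBAC bootstrap roles exist, 500 while individual post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still failing, and finally 200. A self-contained sketch of such a poll; the endpoint and roughly 500 ms interval come from the log, and TLS verification is skipped purely for illustration:

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

func waitForHealthz(url string, timeout time.Duration) error {
    client := &http.Client{
        Timeout:   5 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            code := resp.StatusCode
            resp.Body.Close()
            if code == http.StatusOK {
                return nil // "healthz returned 200: ok"
            }
            fmt.Printf("healthz returned %d, retrying\n", code)
        } else {
            fmt.Printf("healthz not reachable yet: %v\n", err)
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
    if err := waitForHealthz("https://192.168.61.151:8443/healthz", 4*time.Minute); err != nil {
        fmt.Println(err)
    }
}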
	I1216 21:00:31.945546   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:00:31.945555   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:31.947456   60215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:00:28.954572   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:30.955397   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:29.816510   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:30.315756   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:30.815774   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.316516   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.816503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:32.316499   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:32.816455   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:33.316478   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:33.816363   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:34.316057   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.948727   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:00:31.977877   60215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:00:32.014745   60215 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:00:32.027268   60215 system_pods.go:59] 8 kube-system pods found
	I1216 21:00:32.027303   60215 system_pods.go:61] "coredns-668d6bf9bc-rp29f" [0135dcef-2324-49ec-b459-f34b73efd82b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:00:32.027311   60215 system_pods.go:61] "etcd-embed-certs-606219" [05f01ef3-5d92-4d16-9643-0f56df3869f6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 21:00:32.027320   60215 system_pods.go:61] "kube-apiserver-embed-certs-606219" [4294c469-e47a-4722-a620-92c33d23b41e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 21:00:32.027326   60215 system_pods.go:61] "kube-controller-manager-embed-certs-606219" [cc8452e6-ca00-44dd-8d77-897df20d37f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 21:00:32.027354   60215 system_pods.go:61] "kube-proxy-8t495" [492be5cc-7d3a-4983-9bc7-14091bef7b43] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 21:00:32.027362   60215 system_pods.go:61] "kube-scheduler-embed-certs-606219" [63c42d73-a17a-4b37-a585-f7db5923c493] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 21:00:32.027376   60215 system_pods.go:61] "metrics-server-f79f97bbb-d6gmd" [50916d48-ee33-4e96-9507-c486d8ac7f7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:00:32.027387   60215 system_pods.go:61] "storage-provisioner" [1164651f-c3b5-445f-882a-60eb2f2fb3f8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 21:00:32.027399   60215 system_pods.go:74] duration metric: took 12.633182ms to wait for pod list to return data ...
	I1216 21:00:32.027409   60215 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:00:32.041648   60215 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:00:32.041677   60215 node_conditions.go:123] node cpu capacity is 2
	I1216 21:00:32.041686   60215 node_conditions.go:105] duration metric: took 14.27317ms to run NodePressure ...
	I1216 21:00:32.041704   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:32.492472   60215 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 21:00:32.504237   60215 kubeadm.go:739] kubelet initialised
	I1216 21:00:32.504271   60215 kubeadm.go:740] duration metric: took 11.772175ms waiting for restarted kubelet to initialise ...
	I1216 21:00:32.504282   60215 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:00:32.525531   60215 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:34.531954   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:31.321998   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:33.325288   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:32.959143   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:35.454928   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:37.455474   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:34.815839   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:35.316503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:35.816590   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.316231   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.816011   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:37.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:37.816494   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:38.316486   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:38.816475   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:39.315762   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.534516   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.032255   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:35.819575   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:38.322139   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:40.322804   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.456089   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:41.955128   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.816009   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:40.316444   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:40.816493   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.315869   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.816495   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:42.316034   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:42.816422   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:43.316432   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:43.815875   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:44.316036   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.032545   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:43.534471   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:42.819610   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:44.820561   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:43.955190   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:46.455540   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:44.816293   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.316458   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.815992   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:46.316054   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:46.816449   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:47.316113   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:47.816514   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:48.316353   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:48.816144   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:49.316435   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.031682   60215 pod_ready.go:93] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.031705   60215 pod_ready.go:82] duration metric: took 12.506146086s for pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.031715   60215 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.038109   60215 pod_ready.go:93] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.038138   60215 pod_ready.go:82] duration metric: took 6.416609ms for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.038149   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.043764   60215 pod_ready.go:93] pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.043784   60215 pod_ready.go:82] duration metric: took 5.621982ms for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.043793   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.053376   60215 pod_ready.go:93] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.053399   60215 pod_ready.go:82] duration metric: took 9.600142ms for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.053409   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8t495" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.058956   60215 pod_ready.go:93] pod "kube-proxy-8t495" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.058976   60215 pod_ready.go:82] duration metric: took 5.561188ms for pod "kube-proxy-8t495" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.058984   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.429908   60215 pod_ready.go:93] pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.429932   60215 pod_ready.go:82] duration metric: took 370.942192ms for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.429942   60215 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" ...
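Once the apiserver is healthy, each system-critical pod is polled until its Ready condition turns True, which produces the pod_ready.go lines with per-pod duration metrics above; the metrics-server pods, which this test configures against a fake.domain registry, keep reporting "Ready":"False" throughout. A hedged client-go sketch of that per-pod wait; the 4m0s timeout, kubeconfig path and pod name are taken from the log, while the 2-second poll interval is illustrative:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err == nil {
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    if c.Status == corev1.ConditionTrue {
                        return nil
                    }
                    fmt.Printf("pod %q has status \"Ready\":%q\n", name, c.Status)
                }
            }
        }
        time.Sleep(2 * time.Second)
    }
    return fmt.Errorf("pod %q in %q never became Ready", name, ns)
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    if err := waitPodReady(cs, "kube-system", "coredns-668d6bf9bc-rp29f", 4*time.Minute); err != nil {
        fmt.Println(err)
    }
}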
	I1216 21:00:47.438759   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:47.323605   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:49.819763   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:48.456270   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:50.955190   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:49.815935   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:50.316437   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:50.816335   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:51.315747   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:51.816504   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:52.315695   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:52.816115   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:53.316498   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:53.816529   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:54.315689   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:49.935961   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:51.937245   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:53.937302   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:51.820266   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:53.820748   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:52.956645   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:55.456064   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:54.816019   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:55.316484   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:55.816517   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.315858   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.816306   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:57.316447   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:57.815879   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:58.316493   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:58.816395   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:59.316225   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.437390   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:58.938617   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:56.323619   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:58.820330   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:57.956401   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:00.456844   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:02.457677   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:59.816440   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:00.315769   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:00.816285   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.316020   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.818175   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:02.315780   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:02.816411   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:03.315758   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:03.815810   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:04.316731   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.436856   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:03.436945   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:00.820484   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:03.323328   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:04.955714   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.455361   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:04.816470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:05.316528   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:05.815792   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:06.316491   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:06.815977   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:07.316002   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:07.816043   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:07.816114   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:07.861866   60933 cri.go:89] found id: ""
	I1216 21:01:07.861896   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.861906   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:07.861913   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:07.861978   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:07.905674   60933 cri.go:89] found id: ""
	I1216 21:01:07.905700   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.905707   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:07.905713   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:07.905798   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:07.949936   60933 cri.go:89] found id: ""
	I1216 21:01:07.949966   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.949977   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:07.949984   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:07.950048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:07.987196   60933 cri.go:89] found id: ""
	I1216 21:01:07.987223   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.987232   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:07.987237   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:07.987341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:08.033126   60933 cri.go:89] found id: ""
	I1216 21:01:08.033156   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.033168   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:08.033176   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:08.033252   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:08.072223   60933 cri.go:89] found id: ""
	I1216 21:01:08.072257   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.072270   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:08.072278   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:08.072345   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:08.117257   60933 cri.go:89] found id: ""
	I1216 21:01:08.117288   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.117299   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:08.117319   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:08.117389   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:08.158059   60933 cri.go:89] found id: ""
	I1216 21:01:08.158096   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.158106   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:08.158119   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:08.158133   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:08.232930   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:08.232966   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:08.277173   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:08.277204   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:08.331763   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:08.331802   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:08.346150   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:08.346178   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:08.488668   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
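For the v1.20.0 cluster above, whose apiserver never appears, the tooling drops into a diagnostics pass each cycle: it asks CRI-O for containers per component name, finds none, then collects CRI-O, kubelet, dmesg and `kubectl describe nodes` output, the last of which fails because nothing is listening on localhost:8443. A simplified sketch of that collection loop, assuming crictl and journalctl are available on the host (dmesg and describe-nodes would be gathered the same way):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    components := []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    }
    for _, name := range components {
        out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if len(strings.Fields(string(out))) == 0 {
            fmt.Printf("No container was found matching %q\n", name)
        }
    }
    // Fallback log sources when no component containers exist.
    logCmds := [][]string{
        {"journalctl", "-u", "crio", "-n", "400"},
        {"journalctl", "-u", "kubelet", "-n", "400"},
        {"crictl", "ps", "-a"},
    }
    for _, c := range logCmds {
        out, err := exec.Command("sudo", c...).CombinedOutput()
        fmt.Printf("--- %s ---\n%s(err=%v)\n", strings.Join(c, " "), out, err)
    }
}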
	I1216 21:01:05.437627   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.938294   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:05.820491   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.821058   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:10.322630   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:09.456101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:11.461923   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:10.989383   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:11.003162   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:11.003266   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:11.040432   60933 cri.go:89] found id: ""
	I1216 21:01:11.040464   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.040475   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:11.040483   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:11.040547   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:11.083083   60933 cri.go:89] found id: ""
	I1216 21:01:11.083110   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.083117   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:11.083122   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:11.083182   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:11.122842   60933 cri.go:89] found id: ""
	I1216 21:01:11.122880   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.122893   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:11.122900   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:11.122969   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:11.168227   60933 cri.go:89] found id: ""
	I1216 21:01:11.168268   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.168279   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:11.168286   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:11.168359   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:11.218660   60933 cri.go:89] found id: ""
	I1216 21:01:11.218689   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.218701   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:11.218708   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:11.218774   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:11.281179   60933 cri.go:89] found id: ""
	I1216 21:01:11.281214   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.281227   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:11.281236   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:11.281315   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:11.326419   60933 cri.go:89] found id: ""
	I1216 21:01:11.326453   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.326464   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:11.326472   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:11.326535   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:11.368825   60933 cri.go:89] found id: ""
	I1216 21:01:11.368863   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.368875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:11.368887   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:11.368905   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:11.454848   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:11.454869   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:11.454888   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:11.541685   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:11.541724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:11.581804   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:11.581830   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:11.635800   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:11.635838   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:14.152441   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:14.167637   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:14.167720   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:14.206685   60933 cri.go:89] found id: ""
	I1216 21:01:14.206716   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.206728   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:14.206735   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:14.206796   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:14.248126   60933 cri.go:89] found id: ""
	I1216 21:01:14.248151   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.248159   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:14.248165   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:14.248215   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:14.285030   60933 cri.go:89] found id: ""
	I1216 21:01:14.285067   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.285079   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:14.285086   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:14.285151   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:14.325706   60933 cri.go:89] found id: ""
	I1216 21:01:14.325736   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.325747   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:14.325755   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:14.325820   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:14.369447   60933 cri.go:89] found id: ""
	I1216 21:01:14.369475   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.369486   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:14.369494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:14.369557   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:10.437872   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:12.937013   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:12.820480   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:15.319910   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:13.959919   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:16.458101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:14.407792   60933 cri.go:89] found id: ""
	I1216 21:01:14.407818   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.407826   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:14.407832   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:14.407890   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:14.448380   60933 cri.go:89] found id: ""
	I1216 21:01:14.448411   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.448419   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:14.448424   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:14.448473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:14.487116   60933 cri.go:89] found id: ""
	I1216 21:01:14.487144   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.487154   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:14.487164   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:14.487177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:14.547342   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:14.547390   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:14.563385   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:14.563424   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:14.637363   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:14.637394   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:14.637410   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:14.715586   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:14.715626   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:17.258974   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:17.273896   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:17.273970   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:17.317359   60933 cri.go:89] found id: ""
	I1216 21:01:17.317394   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.317405   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:17.317412   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:17.317476   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:17.361422   60933 cri.go:89] found id: ""
	I1216 21:01:17.361451   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.361462   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:17.361469   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:17.361568   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:17.401466   60933 cri.go:89] found id: ""
	I1216 21:01:17.401522   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.401534   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:17.401544   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:17.401614   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:17.439560   60933 cri.go:89] found id: ""
	I1216 21:01:17.439588   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.439597   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:17.439603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:17.439655   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:17.480310   60933 cri.go:89] found id: ""
	I1216 21:01:17.480333   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.480340   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:17.480345   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:17.480393   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:17.528562   60933 cri.go:89] found id: ""
	I1216 21:01:17.528589   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.528600   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:17.528607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:17.528671   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:17.569863   60933 cri.go:89] found id: ""
	I1216 21:01:17.569900   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.569908   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:17.569914   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:17.569975   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:17.610840   60933 cri.go:89] found id: ""
	I1216 21:01:17.610867   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.610875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:17.610884   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:17.610895   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:17.661002   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:17.661041   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:17.675290   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:17.675318   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:17.743550   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:17.743572   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:17.743584   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:17.824479   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:17.824524   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:15.437260   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:17.937487   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:17.324337   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:19.819325   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:18.956605   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:20.957030   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:20.373687   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:20.389149   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:20.389244   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:20.429594   60933 cri.go:89] found id: ""
	I1216 21:01:20.429626   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.429634   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:20.429639   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:20.429693   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:20.473157   60933 cri.go:89] found id: ""
	I1216 21:01:20.473185   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.473193   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:20.473198   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:20.473264   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:20.512549   60933 cri.go:89] found id: ""
	I1216 21:01:20.512586   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.512597   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:20.512604   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:20.512676   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:20.549275   60933 cri.go:89] found id: ""
	I1216 21:01:20.549310   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.549323   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:20.549344   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:20.549408   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:20.587405   60933 cri.go:89] found id: ""
	I1216 21:01:20.587435   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.587443   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:20.587449   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:20.587515   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:20.625364   60933 cri.go:89] found id: ""
	I1216 21:01:20.625393   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.625400   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:20.625406   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:20.625456   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:20.664018   60933 cri.go:89] found id: ""
	I1216 21:01:20.664050   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.664061   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:20.664068   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:20.664117   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:20.703860   60933 cri.go:89] found id: ""
	I1216 21:01:20.703890   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.703898   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:20.703906   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:20.703918   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:20.754433   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:20.754470   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:20.770136   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:20.770172   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:20.854025   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:20.854049   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:20.854061   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:20.939628   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:20.939661   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:23.489645   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:23.503603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:23.503667   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:23.543044   60933 cri.go:89] found id: ""
	I1216 21:01:23.543070   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.543077   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:23.543083   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:23.543131   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:23.580333   60933 cri.go:89] found id: ""
	I1216 21:01:23.580362   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.580371   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:23.580377   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:23.580428   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:23.616732   60933 cri.go:89] found id: ""
	I1216 21:01:23.616766   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.616778   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:23.616785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:23.616834   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:23.655771   60933 cri.go:89] found id: ""
	I1216 21:01:23.655793   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.655801   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:23.655807   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:23.655861   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:23.694400   60933 cri.go:89] found id: ""
	I1216 21:01:23.694430   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.694437   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:23.694443   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:23.694500   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:23.732592   60933 cri.go:89] found id: ""
	I1216 21:01:23.732622   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.732630   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:23.732636   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:23.732688   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:23.769752   60933 cri.go:89] found id: ""
	I1216 21:01:23.769787   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.769801   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:23.769810   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:23.769892   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:23.806891   60933 cri.go:89] found id: ""
	I1216 21:01:23.806925   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.806936   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:23.806947   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:23.806963   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:23.822887   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:23.822912   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:23.898795   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:23.898817   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:23.898830   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:23.978036   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:23.978073   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:24.032500   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:24.032528   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:20.437888   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:22.936895   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:21.819859   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:23.820383   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:23.456331   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:25.960513   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:26.585937   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:26.599470   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:26.599543   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:26.635421   60933 cri.go:89] found id: ""
	I1216 21:01:26.635446   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.635455   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:26.635461   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:26.635527   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:26.675347   60933 cri.go:89] found id: ""
	I1216 21:01:26.675379   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.675390   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:26.675397   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:26.675464   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:26.715444   60933 cri.go:89] found id: ""
	I1216 21:01:26.715469   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.715480   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:26.715541   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:26.715619   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:26.753841   60933 cri.go:89] found id: ""
	I1216 21:01:26.753874   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.753893   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:26.753901   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:26.753963   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:26.791427   60933 cri.go:89] found id: ""
	I1216 21:01:26.791453   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.791464   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:26.791473   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:26.791539   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:26.832772   60933 cri.go:89] found id: ""
	I1216 21:01:26.832804   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.832816   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:26.832823   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:26.832887   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:26.869963   60933 cri.go:89] found id: ""
	I1216 21:01:26.869990   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.869997   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:26.870003   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:26.870068   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:26.906792   60933 cri.go:89] found id: ""
	I1216 21:01:26.906821   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.906862   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:26.906875   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:26.906894   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:26.994820   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:26.994863   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:27.034642   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:27.034686   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:27.089128   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:27.089168   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:27.104368   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:27.104401   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:27.179852   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:25.436696   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:27.937229   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:26.319568   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:28.820132   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:28.454880   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:30.455734   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:29.681052   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:29.695376   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:29.695464   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:29.735562   60933 cri.go:89] found id: ""
	I1216 21:01:29.735588   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.735596   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:29.735602   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:29.735650   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:29.772635   60933 cri.go:89] found id: ""
	I1216 21:01:29.772663   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.772672   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:29.772678   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:29.772737   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:29.810471   60933 cri.go:89] found id: ""
	I1216 21:01:29.810499   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.810509   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:29.810516   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:29.810575   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:29.845917   60933 cri.go:89] found id: ""
	I1216 21:01:29.845952   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.845966   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:29.845975   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:29.846048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:29.883866   60933 cri.go:89] found id: ""
	I1216 21:01:29.883892   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.883900   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:29.883906   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:29.883968   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:29.920696   60933 cri.go:89] found id: ""
	I1216 21:01:29.920729   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.920740   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:29.920748   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:29.920831   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:29.957977   60933 cri.go:89] found id: ""
	I1216 21:01:29.958056   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.958069   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:29.958079   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:29.958144   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:29.995436   60933 cri.go:89] found id: ""
	I1216 21:01:29.995464   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.995472   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:29.995481   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:29.995492   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:30.046819   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:30.046859   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:30.062754   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:30.062807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:30.138932   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:30.138959   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:30.138975   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:30.225720   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:30.225768   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:32.768185   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:32.782642   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:32.782729   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:32.821995   60933 cri.go:89] found id: ""
	I1216 21:01:32.822029   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.822040   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:32.822048   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:32.822112   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:32.858453   60933 cri.go:89] found id: ""
	I1216 21:01:32.858487   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.858497   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:32.858504   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:32.858570   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:32.896269   60933 cri.go:89] found id: ""
	I1216 21:01:32.896304   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.896316   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:32.896323   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:32.896384   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:32.936795   60933 cri.go:89] found id: ""
	I1216 21:01:32.936820   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.936832   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:32.936838   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:32.936904   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:32.974779   60933 cri.go:89] found id: ""
	I1216 21:01:32.974810   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.974821   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:32.974828   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:32.974892   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:33.012201   60933 cri.go:89] found id: ""
	I1216 21:01:33.012226   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.012234   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:33.012239   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:33.012287   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:33.049777   60933 cri.go:89] found id: ""
	I1216 21:01:33.049803   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.049811   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:33.049816   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:33.049873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:33.087820   60933 cri.go:89] found id: ""
	I1216 21:01:33.087851   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.087859   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:33.087870   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:33.087885   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:33.140816   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:33.140854   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:33.154817   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:33.154855   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:33.231445   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:33.231474   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:33.231496   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:33.311547   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:33.311586   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:29.938045   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:32.436934   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:34.444209   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:31.321180   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:33.324091   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:32.956028   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.454994   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:37.455094   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.855686   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:35.870404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:35.870485   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:35.908175   60933 cri.go:89] found id: ""
	I1216 21:01:35.908204   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.908215   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:35.908222   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:35.908284   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:35.955456   60933 cri.go:89] found id: ""
	I1216 21:01:35.955483   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.955494   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:35.955501   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:35.955562   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:35.995170   60933 cri.go:89] found id: ""
	I1216 21:01:35.995201   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.995211   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:35.995218   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:35.995305   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:36.033729   60933 cri.go:89] found id: ""
	I1216 21:01:36.033758   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.033769   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:36.033776   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:36.033840   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:36.072756   60933 cri.go:89] found id: ""
	I1216 21:01:36.072787   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.072799   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:36.072806   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:36.072873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:36.112149   60933 cri.go:89] found id: ""
	I1216 21:01:36.112187   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.112198   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:36.112205   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:36.112258   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:36.148742   60933 cri.go:89] found id: ""
	I1216 21:01:36.148770   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.148781   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:36.148789   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:36.148855   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:36.192827   60933 cri.go:89] found id: ""
	I1216 21:01:36.192864   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.192875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:36.192886   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:36.192901   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:36.243822   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:36.243867   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:36.258258   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:36.258292   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:36.342847   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:36.342876   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:36.342891   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:36.424741   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:36.424780   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:38.967334   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:38.982208   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:38.982283   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:39.023903   60933 cri.go:89] found id: ""
	I1216 21:01:39.023931   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.023939   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:39.023945   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:39.023997   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:39.070314   60933 cri.go:89] found id: ""
	I1216 21:01:39.070342   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.070351   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:39.070359   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:39.070423   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:39.115081   60933 cri.go:89] found id: ""
	I1216 21:01:39.115106   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.115113   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:39.115119   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:39.115178   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:39.151933   60933 cri.go:89] found id: ""
	I1216 21:01:39.151959   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.151967   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:39.151972   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:39.152022   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:39.192280   60933 cri.go:89] found id: ""
	I1216 21:01:39.192307   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.192315   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:39.192322   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:39.192370   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:39.228792   60933 cri.go:89] found id: ""
	I1216 21:01:39.228814   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.228822   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:39.228827   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:39.228887   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:39.266823   60933 cri.go:89] found id: ""
	I1216 21:01:39.266847   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.266854   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:39.266860   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:39.266908   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:39.301317   60933 cri.go:89] found id: ""
	I1216 21:01:39.301340   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.301348   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:39.301361   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:39.301372   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:39.386615   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:39.386663   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:36.936376   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:38.936968   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.820025   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:37.820396   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:40.319915   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:39.457790   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:41.955758   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:39.433079   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:39.433112   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:39.489422   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:39.489458   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:39.504223   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:39.504259   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:39.587898   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:42.088900   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:42.103768   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:42.103854   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:42.141956   60933 cri.go:89] found id: ""
	I1216 21:01:42.142026   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.142040   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:42.142049   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:42.142117   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:42.178754   60933 cri.go:89] found id: ""
	I1216 21:01:42.178782   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.178818   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:42.178833   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:42.178891   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:42.215872   60933 cri.go:89] found id: ""
	I1216 21:01:42.215905   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.215916   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:42.215923   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:42.215991   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:42.253854   60933 cri.go:89] found id: ""
	I1216 21:01:42.253885   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.253896   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:42.253904   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:42.253972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:42.290963   60933 cri.go:89] found id: ""
	I1216 21:01:42.291008   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.291023   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:42.291039   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:42.291109   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:42.332920   60933 cri.go:89] found id: ""
	I1216 21:01:42.332946   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.332953   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:42.332959   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:42.333006   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:42.375060   60933 cri.go:89] found id: ""
	I1216 21:01:42.375093   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.375104   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:42.375112   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:42.375189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:42.416593   60933 cri.go:89] found id: ""
	I1216 21:01:42.416621   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.416631   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:42.416639   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:42.416651   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:42.475204   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:42.475260   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:42.491022   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:42.491057   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:42.566645   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:42.566672   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:42.566687   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:42.646815   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:42.646856   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:41.436872   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:43.936734   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:42.321709   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:44.321985   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:43.955807   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:46.455508   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:45.191912   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:45.205487   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:45.205548   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:45.245350   60933 cri.go:89] found id: ""
	I1216 21:01:45.245389   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.245397   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:45.245404   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:45.245482   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:45.302126   60933 cri.go:89] found id: ""
	I1216 21:01:45.302158   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.302171   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:45.302178   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:45.302251   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:45.342888   60933 cri.go:89] found id: ""
	I1216 21:01:45.342917   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.342932   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:45.342937   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:45.342990   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:45.381545   60933 cri.go:89] found id: ""
	I1216 21:01:45.381574   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.381585   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:45.381593   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:45.381652   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:45.418081   60933 cri.go:89] found id: ""
	I1216 21:01:45.418118   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.418131   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:45.418138   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:45.418207   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:45.458610   60933 cri.go:89] found id: ""
	I1216 21:01:45.458637   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.458647   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:45.458655   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:45.458713   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:45.500102   60933 cri.go:89] found id: ""
	I1216 21:01:45.500137   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.500148   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:45.500155   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:45.500217   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:45.542074   60933 cri.go:89] found id: ""
	I1216 21:01:45.542103   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.542113   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:45.542122   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:45.542134   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:45.597577   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:45.597614   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:45.614028   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:45.614075   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:45.693014   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:45.693039   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:45.693056   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:45.772260   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:45.772295   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:48.317073   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:48.332176   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:48.332242   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:48.369946   60933 cri.go:89] found id: ""
	I1216 21:01:48.369976   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.369988   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:48.369994   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:48.370059   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:48.407628   60933 cri.go:89] found id: ""
	I1216 21:01:48.407661   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.407672   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:48.407680   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:48.407742   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:48.444377   60933 cri.go:89] found id: ""
	I1216 21:01:48.444403   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.444411   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:48.444416   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:48.444467   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:48.485674   60933 cri.go:89] found id: ""
	I1216 21:01:48.485710   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.485722   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:48.485730   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:48.485785   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:48.530577   60933 cri.go:89] found id: ""
	I1216 21:01:48.530610   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.530621   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:48.530628   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:48.530693   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:48.567128   60933 cri.go:89] found id: ""
	I1216 21:01:48.567151   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.567159   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:48.567165   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:48.567216   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:48.603294   60933 cri.go:89] found id: ""
	I1216 21:01:48.603320   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.603327   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:48.603333   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:48.603392   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:48.646221   60933 cri.go:89] found id: ""
	I1216 21:01:48.646253   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.646265   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:48.646288   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:48.646318   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:48.697589   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:48.697624   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:48.711916   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:48.711947   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:48.789068   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:48.789097   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:48.789113   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:48.872340   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:48.872378   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:45.937806   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.437160   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:46.819986   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.821079   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.456975   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:50.956101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:51.418176   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:51.434851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:51.434948   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:51.478935   60933 cri.go:89] found id: ""
	I1216 21:01:51.478963   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.478975   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:51.478982   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:51.479043   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:51.524581   60933 cri.go:89] found id: ""
	I1216 21:01:51.524611   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.524622   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:51.524629   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:51.524686   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:51.563479   60933 cri.go:89] found id: ""
	I1216 21:01:51.563507   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.563516   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:51.563521   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:51.563578   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:51.601931   60933 cri.go:89] found id: ""
	I1216 21:01:51.601964   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.601975   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:51.601982   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:51.602044   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:51.638984   60933 cri.go:89] found id: ""
	I1216 21:01:51.639014   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.639025   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:51.639032   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:51.639093   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:51.681137   60933 cri.go:89] found id: ""
	I1216 21:01:51.681167   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.681178   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:51.681185   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:51.681263   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:51.722904   60933 cri.go:89] found id: ""
	I1216 21:01:51.722932   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.722941   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:51.722946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:51.722994   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:51.794403   60933 cri.go:89] found id: ""
	I1216 21:01:51.794434   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.794444   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:51.794453   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:51.794464   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:51.850688   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:51.850724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:51.866049   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:51.866079   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:51.949844   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:51.949880   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:51.949894   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:52.028981   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:52.029023   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:50.936202   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:52.936839   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:51.321959   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:53.819864   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:53.455360   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:55.954957   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:54.570192   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:54.585405   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:54.585489   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:54.627670   60933 cri.go:89] found id: ""
	I1216 21:01:54.627701   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.627712   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:54.627719   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:54.627782   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:54.671226   60933 cri.go:89] found id: ""
	I1216 21:01:54.671265   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.671276   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:54.671283   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:54.671337   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:54.705549   60933 cri.go:89] found id: ""
	I1216 21:01:54.705581   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.705592   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:54.705600   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:54.705663   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:54.743638   60933 cri.go:89] found id: ""
	I1216 21:01:54.743664   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.743671   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:54.743677   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:54.743728   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:54.781714   60933 cri.go:89] found id: ""
	I1216 21:01:54.781750   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.781760   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:54.781767   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:54.781831   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:54.830808   60933 cri.go:89] found id: ""
	I1216 21:01:54.830840   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.830851   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:54.830859   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:54.830923   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:54.868539   60933 cri.go:89] found id: ""
	I1216 21:01:54.868565   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.868573   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:54.868578   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:54.868626   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:54.906554   60933 cri.go:89] found id: ""
	I1216 21:01:54.906587   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.906595   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:54.906604   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:54.906617   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:54.960664   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:54.960696   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:54.975657   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:54.975686   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:55.052266   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:55.052293   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:55.052320   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:55.137894   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:55.137937   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:57.682769   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:57.699102   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:57.699184   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:57.764651   60933 cri.go:89] found id: ""
	I1216 21:01:57.764684   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.764692   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:57.764698   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:57.764755   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:57.805358   60933 cri.go:89] found id: ""
	I1216 21:01:57.805385   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.805395   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:57.805404   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:57.805474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:57.843589   60933 cri.go:89] found id: ""
	I1216 21:01:57.843623   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.843634   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:57.843644   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:57.843716   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:57.881725   60933 cri.go:89] found id: ""
	I1216 21:01:57.881748   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.881756   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:57.881761   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:57.881811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:57.922252   60933 cri.go:89] found id: ""
	I1216 21:01:57.922293   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.922305   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:57.922322   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:57.922385   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:57.962532   60933 cri.go:89] found id: ""
	I1216 21:01:57.962555   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.962562   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:57.962567   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:57.962615   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:58.002021   60933 cri.go:89] found id: ""
	I1216 21:01:58.002056   60933 logs.go:282] 0 containers: []
	W1216 21:01:58.002067   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:58.002074   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:58.002137   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:58.035648   60933 cri.go:89] found id: ""
	I1216 21:01:58.035672   60933 logs.go:282] 0 containers: []
	W1216 21:01:58.035680   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:58.035688   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:58.035699   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:58.116142   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:58.116177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:58.157683   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:58.157717   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:58.211686   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:58.211722   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:58.226385   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:58.226409   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:58.302287   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:54.937208   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:57.437396   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:59.438489   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:56.326836   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:58.818671   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:57.955980   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.455212   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.802544   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:00.816325   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:00.816405   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:00.853031   60933 cri.go:89] found id: ""
	I1216 21:02:00.853057   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.853065   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:00.853070   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:00.853122   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:00.891040   60933 cri.go:89] found id: ""
	I1216 21:02:00.891071   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.891082   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:00.891089   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:00.891151   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:00.929145   60933 cri.go:89] found id: ""
	I1216 21:02:00.929168   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.929175   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:00.929181   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:00.929227   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:00.976469   60933 cri.go:89] found id: ""
	I1216 21:02:00.976492   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.976500   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:00.976505   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:00.976553   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:01.015053   60933 cri.go:89] found id: ""
	I1216 21:02:01.015078   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.015086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:01.015092   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:01.015150   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:01.052859   60933 cri.go:89] found id: ""
	I1216 21:02:01.052891   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.052902   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:01.052909   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:01.053028   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:01.091209   60933 cri.go:89] found id: ""
	I1216 21:02:01.091238   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.091259   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:01.091266   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:01.091341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:01.127013   60933 cri.go:89] found id: ""
	I1216 21:02:01.127038   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.127047   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:01.127058   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:01.127072   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:01.179642   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:01.179697   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:01.196390   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:01.196416   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:01.275446   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:01.275478   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:01.275493   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:01.354391   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:01.354429   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:03.897672   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:03.911596   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:03.911654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:03.955700   60933 cri.go:89] found id: ""
	I1216 21:02:03.955726   60933 logs.go:282] 0 containers: []
	W1216 21:02:03.955735   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:03.955741   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:03.955803   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:03.995661   60933 cri.go:89] found id: ""
	I1216 21:02:03.995696   60933 logs.go:282] 0 containers: []
	W1216 21:02:03.995706   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:03.995713   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:03.995772   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:04.031368   60933 cri.go:89] found id: ""
	I1216 21:02:04.031391   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.031398   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:04.031406   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:04.031455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:04.067633   60933 cri.go:89] found id: ""
	I1216 21:02:04.067659   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.067666   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:04.067671   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:04.067719   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:04.105734   60933 cri.go:89] found id: ""
	I1216 21:02:04.105758   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.105768   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:04.105773   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:04.105824   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:04.146542   60933 cri.go:89] found id: ""
	I1216 21:02:04.146564   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.146571   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:04.146577   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:04.146623   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:04.184433   60933 cri.go:89] found id: ""
	I1216 21:02:04.184462   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.184473   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:04.184480   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:04.184551   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:04.223077   60933 cri.go:89] found id: ""
	I1216 21:02:04.223106   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.223117   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:04.223127   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:04.223140   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:04.279618   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:04.279656   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:04.295841   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:04.295865   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:04.372609   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:04.372632   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:04.372648   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:01.937175   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:03.937249   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.819801   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:02.820563   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:05.320087   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:02.955461   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:05.455023   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:07.456981   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:04.457597   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:04.457631   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:07.006004   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:07.020394   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:07.020537   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:07.064242   60933 cri.go:89] found id: ""
	I1216 21:02:07.064274   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.064283   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:07.064289   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:07.064337   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:07.108865   60933 cri.go:89] found id: ""
	I1216 21:02:07.108899   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.108910   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:07.108917   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:07.108985   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:07.149021   60933 cri.go:89] found id: ""
	I1216 21:02:07.149051   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.149060   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:07.149066   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:07.149120   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:07.187808   60933 cri.go:89] found id: ""
	I1216 21:02:07.187833   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.187843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:07.187850   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:07.187912   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:07.228748   60933 cri.go:89] found id: ""
	I1216 21:02:07.228774   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.228785   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:07.228792   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:07.228853   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:07.267961   60933 cri.go:89] found id: ""
	I1216 21:02:07.267996   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.268012   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:07.268021   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:07.268099   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:07.312464   60933 cri.go:89] found id: ""
	I1216 21:02:07.312491   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.312498   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:07.312503   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:07.312554   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:07.351902   60933 cri.go:89] found id: ""
	I1216 21:02:07.351933   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.351946   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:07.351958   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:07.351974   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:07.405985   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:07.406050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:07.420796   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:07.420842   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:07.506527   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:07.506559   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:07.506574   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:07.587965   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:07.588001   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:06.437434   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:08.937843   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:07.320229   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:09.819940   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:09.954900   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:11.955004   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:10.132876   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:10.146785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:10.146858   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:10.189278   60933 cri.go:89] found id: ""
	I1216 21:02:10.189312   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.189324   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:10.189332   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:10.189402   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:10.228331   60933 cri.go:89] found id: ""
	I1216 21:02:10.228370   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.228378   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:10.228383   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:10.228436   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:10.266424   60933 cri.go:89] found id: ""
	I1216 21:02:10.266458   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.266470   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:10.266478   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:10.266542   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:10.305865   60933 cri.go:89] found id: ""
	I1216 21:02:10.305890   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.305902   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:10.305909   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:10.305968   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:10.344211   60933 cri.go:89] found id: ""
	I1216 21:02:10.344239   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.344247   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:10.344253   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:10.344314   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:10.381939   60933 cri.go:89] found id: ""
	I1216 21:02:10.381993   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.382004   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:10.382011   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:10.382076   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:10.418882   60933 cri.go:89] found id: ""
	I1216 21:02:10.418908   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.418915   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:10.418921   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:10.418972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:10.458397   60933 cri.go:89] found id: ""
	I1216 21:02:10.458425   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.458434   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:10.458447   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:10.458462   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:10.472152   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:10.472180   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:10.545888   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:10.545913   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:10.545926   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:10.627223   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:10.627293   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:10.676606   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:10.676633   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:13.227283   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:13.242871   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:13.242954   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:13.280676   60933 cri.go:89] found id: ""
	I1216 21:02:13.280711   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.280723   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:13.280731   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:13.280786   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:13.321357   60933 cri.go:89] found id: ""
	I1216 21:02:13.321389   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.321400   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:13.321408   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:13.321474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:13.359002   60933 cri.go:89] found id: ""
	I1216 21:02:13.359030   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.359042   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:13.359050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:13.359116   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:13.395879   60933 cri.go:89] found id: ""
	I1216 21:02:13.395922   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.395941   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:13.395950   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:13.396017   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:13.436761   60933 cri.go:89] found id: ""
	I1216 21:02:13.436781   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.436788   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:13.436793   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:13.436852   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:13.478839   60933 cri.go:89] found id: ""
	I1216 21:02:13.478869   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.478877   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:13.478883   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:13.478947   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:13.520013   60933 cri.go:89] found id: ""
	I1216 21:02:13.520037   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.520044   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:13.520050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:13.520124   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:13.556973   60933 cri.go:89] found id: ""
	I1216 21:02:13.557001   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.557013   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:13.557023   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:13.557039   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:13.613499   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:13.613537   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:13.628689   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:13.628724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:13.706556   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:13.706576   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:13.706589   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:13.786379   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:13.786419   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:11.436179   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:13.436800   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:11.820109   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:13.820778   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:14.457666   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.955591   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.333578   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:16.347948   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:16.348020   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:16.386928   60933 cri.go:89] found id: ""
	I1216 21:02:16.386955   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.386963   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:16.386969   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:16.387033   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:16.425192   60933 cri.go:89] found id: ""
	I1216 21:02:16.425253   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.425265   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:16.425273   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:16.425355   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:16.465522   60933 cri.go:89] found id: ""
	I1216 21:02:16.465554   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.465565   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:16.465573   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:16.465638   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:16.504567   60933 cri.go:89] found id: ""
	I1216 21:02:16.504605   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.504616   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:16.504624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:16.504694   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:16.541823   60933 cri.go:89] found id: ""
	I1216 21:02:16.541852   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.541864   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:16.541872   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:16.541942   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:16.580898   60933 cri.go:89] found id: ""
	I1216 21:02:16.580927   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.580938   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:16.580946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:16.581003   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:16.626006   60933 cri.go:89] found id: ""
	I1216 21:02:16.626036   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.626046   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:16.626053   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:16.626109   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:16.662686   60933 cri.go:89] found id: ""
	I1216 21:02:16.662712   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.662719   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:16.662728   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:16.662740   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:16.717939   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:16.717978   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:16.733431   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:16.733466   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:16.807379   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:16.807409   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:16.807421   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:16.896455   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:16.896492   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:15.437791   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:17.935778   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.321167   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:18.819624   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:18.955621   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:20.956220   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:19.442959   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:19.458684   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:19.458749   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:19.499907   60933 cri.go:89] found id: ""
	I1216 21:02:19.499938   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.499947   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:19.499954   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:19.500002   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:19.538010   60933 cri.go:89] found id: ""
	I1216 21:02:19.538035   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.538043   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:19.538049   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:19.538148   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:19.577097   60933 cri.go:89] found id: ""
	I1216 21:02:19.577131   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.577139   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:19.577145   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:19.577196   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:19.617288   60933 cri.go:89] found id: ""
	I1216 21:02:19.617316   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.617326   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:19.617332   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:19.617392   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:19.658066   60933 cri.go:89] found id: ""
	I1216 21:02:19.658090   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.658097   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:19.658103   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:19.658153   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:19.696077   60933 cri.go:89] found id: ""
	I1216 21:02:19.696108   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.696121   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:19.696131   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:19.696189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:19.737657   60933 cri.go:89] found id: ""
	I1216 21:02:19.737692   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.737704   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:19.737712   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:19.737776   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:19.778699   60933 cri.go:89] found id: ""
	I1216 21:02:19.778729   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.778738   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:19.778746   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:19.778757   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:19.841941   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:19.841979   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:19.857752   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:19.857788   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:19.935980   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:19.936004   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:19.936020   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:20.019999   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:20.020046   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:22.566398   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:22.580376   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:22.580472   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:22.620240   60933 cri.go:89] found id: ""
	I1216 21:02:22.620273   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.620284   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:22.620292   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:22.620355   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:22.656413   60933 cri.go:89] found id: ""
	I1216 21:02:22.656444   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.656455   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:22.656463   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:22.656531   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:22.690956   60933 cri.go:89] found id: ""
	I1216 21:02:22.690978   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.690986   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:22.690992   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:22.691040   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:22.734851   60933 cri.go:89] found id: ""
	I1216 21:02:22.734885   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.734895   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:22.734903   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:22.734969   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:22.774416   60933 cri.go:89] found id: ""
	I1216 21:02:22.774450   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.774461   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:22.774467   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:22.774535   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:22.811162   60933 cri.go:89] found id: ""
	I1216 21:02:22.811192   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.811204   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:22.811212   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:22.811296   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:22.851955   60933 cri.go:89] found id: ""
	I1216 21:02:22.851980   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.851987   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:22.851993   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:22.852051   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:22.888699   60933 cri.go:89] found id: ""
	I1216 21:02:22.888725   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.888736   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:22.888747   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:22.888769   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:22.944065   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:22.944100   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:22.960842   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:22.960872   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:23.036229   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:23.036251   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:23.036263   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:23.122493   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:23.122535   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:19.936687   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:21.937222   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:24.437190   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:20.820544   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:22.820771   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.319776   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:22.956523   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.456180   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.667995   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:25.682152   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:25.682222   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:25.719092   60933 cri.go:89] found id: ""
	I1216 21:02:25.719120   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.719130   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:25.719135   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:25.719190   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:25.757668   60933 cri.go:89] found id: ""
	I1216 21:02:25.757702   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.757712   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:25.757720   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:25.757791   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:25.809743   60933 cri.go:89] found id: ""
	I1216 21:02:25.809776   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.809787   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:25.809795   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:25.809857   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:25.849181   60933 cri.go:89] found id: ""
	I1216 21:02:25.849211   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.849222   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:25.849230   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:25.849295   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:25.891032   60933 cri.go:89] found id: ""
	I1216 21:02:25.891079   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.891091   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:25.891098   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:25.891169   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:25.930549   60933 cri.go:89] found id: ""
	I1216 21:02:25.930575   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.930583   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:25.930589   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:25.930639   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:25.971709   60933 cri.go:89] found id: ""
	I1216 21:02:25.971736   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.971744   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:25.971749   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:25.971797   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:26.007728   60933 cri.go:89] found id: ""
	I1216 21:02:26.007760   60933 logs.go:282] 0 containers: []
	W1216 21:02:26.007769   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:26.007778   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:26.007791   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:26.059710   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:26.059752   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:26.074596   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:26.074627   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:26.145892   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:26.145913   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:26.145924   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:26.225961   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:26.226000   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:28.772974   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:28.787001   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:28.787078   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:28.828176   60933 cri.go:89] found id: ""
	I1216 21:02:28.828206   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.828214   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:28.828223   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:28.828292   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:28.872750   60933 cri.go:89] found id: ""
	I1216 21:02:28.872781   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.872792   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:28.872798   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:28.872859   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:28.914844   60933 cri.go:89] found id: ""
	I1216 21:02:28.914871   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.914879   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:28.914884   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:28.914934   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:28.953541   60933 cri.go:89] found id: ""
	I1216 21:02:28.953569   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.953579   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:28.953587   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:28.953647   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:28.992768   60933 cri.go:89] found id: ""
	I1216 21:02:28.992797   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.992808   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:28.992816   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:28.992882   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:29.030069   60933 cri.go:89] found id: ""
	I1216 21:02:29.030102   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.030113   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:29.030121   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:29.030187   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:29.068629   60933 cri.go:89] found id: ""
	I1216 21:02:29.068658   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.068666   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:29.068677   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:29.068726   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:29.103664   60933 cri.go:89] found id: ""
	I1216 21:02:29.103697   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.103708   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:29.103719   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:29.103732   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:29.151225   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:29.151276   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:29.209448   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:29.209499   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:29.225232   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:29.225257   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:29.309812   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:29.309832   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:29.309846   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:26.937193   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:28.937302   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:27.320052   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:29.820220   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:27.956244   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:29.957111   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:32.456969   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:31.896263   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:31.912378   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:31.912455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:31.950479   60933 cri.go:89] found id: ""
	I1216 21:02:31.950508   60933 logs.go:282] 0 containers: []
	W1216 21:02:31.950527   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:31.950535   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:31.950600   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:31.990479   60933 cri.go:89] found id: ""
	I1216 21:02:31.990504   60933 logs.go:282] 0 containers: []
	W1216 21:02:31.990515   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:31.990533   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:31.990599   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:32.032808   60933 cri.go:89] found id: ""
	I1216 21:02:32.032834   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.032843   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:32.032853   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:32.032913   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:32.069719   60933 cri.go:89] found id: ""
	I1216 21:02:32.069748   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.069759   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:32.069772   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:32.069830   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:32.106652   60933 cri.go:89] found id: ""
	I1216 21:02:32.106685   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.106694   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:32.106701   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:32.106767   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:32.145921   60933 cri.go:89] found id: ""
	I1216 21:02:32.145949   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.145957   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:32.145963   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:32.146014   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:32.206313   60933 cri.go:89] found id: ""
	I1216 21:02:32.206342   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.206351   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:32.206356   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:32.206410   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:32.262757   60933 cri.go:89] found id: ""
	I1216 21:02:32.262794   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.262806   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:32.262818   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:32.262832   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:32.320221   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:32.320251   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:32.375395   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:32.375437   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:32.391103   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:32.391137   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:32.474709   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:32.474741   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:32.474757   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:31.436689   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:33.436921   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:32.320631   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:34.819726   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:34.956369   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:37.455577   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:35.058809   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:35.073074   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:35.073157   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:35.115280   60933 cri.go:89] found id: ""
	I1216 21:02:35.115305   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.115312   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:35.115318   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:35.115378   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:35.151561   60933 cri.go:89] found id: ""
	I1216 21:02:35.151589   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.151597   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:35.151603   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:35.151654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:35.192061   60933 cri.go:89] found id: ""
	I1216 21:02:35.192088   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.192095   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:35.192111   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:35.192161   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:35.231493   60933 cri.go:89] found id: ""
	I1216 21:02:35.231523   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.231531   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:35.231538   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:35.231586   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:35.271236   60933 cri.go:89] found id: ""
	I1216 21:02:35.271291   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.271300   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:35.271306   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:35.271368   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:35.309950   60933 cri.go:89] found id: ""
	I1216 21:02:35.309980   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.309991   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:35.309999   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:35.310062   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:35.347762   60933 cri.go:89] found id: ""
	I1216 21:02:35.347790   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.347797   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:35.347803   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:35.347851   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:35.390732   60933 cri.go:89] found id: ""
	I1216 21:02:35.390757   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.390765   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:35.390774   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:35.390785   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:35.447068   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:35.447112   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:35.462873   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:35.462904   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:35.541120   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:35.541145   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:35.541162   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:35.627073   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:35.627120   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:38.170994   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:38.194371   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:38.194434   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:38.248023   60933 cri.go:89] found id: ""
	I1216 21:02:38.248050   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.248061   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:38.248069   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:38.248147   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:38.300143   60933 cri.go:89] found id: ""
	I1216 21:02:38.300175   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.300185   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:38.300193   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:38.300253   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:38.345273   60933 cri.go:89] found id: ""
	I1216 21:02:38.345300   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.345308   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:38.345314   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:38.345389   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:38.383032   60933 cri.go:89] found id: ""
	I1216 21:02:38.383066   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.383075   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:38.383081   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:38.383135   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:38.426042   60933 cri.go:89] found id: ""
	I1216 21:02:38.426074   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.426086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:38.426094   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:38.426159   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:38.467596   60933 cri.go:89] found id: ""
	I1216 21:02:38.467625   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.467634   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:38.467640   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:38.467692   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:38.509340   60933 cri.go:89] found id: ""
	I1216 21:02:38.509380   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.509391   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:38.509399   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:38.509470   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:38.549306   60933 cri.go:89] found id: ""
	I1216 21:02:38.549337   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.549354   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:38.549365   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:38.549381   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:38.564091   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:38.564131   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:38.639173   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:38.639201   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:38.639219   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:38.716320   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:38.716376   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:38.756779   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:38.756815   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:35.437230   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:37.938595   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:36.820302   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:39.319712   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:39.954558   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:41.955761   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:41.310680   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:41.327606   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:41.327684   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:41.371622   60933 cri.go:89] found id: ""
	I1216 21:02:41.371657   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.371670   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:41.371679   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:41.371739   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:41.408149   60933 cri.go:89] found id: ""
	I1216 21:02:41.408187   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.408198   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:41.408203   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:41.408252   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:41.448445   60933 cri.go:89] found id: ""
	I1216 21:02:41.448471   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.448478   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:41.448484   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:41.448533   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:41.489957   60933 cri.go:89] found id: ""
	I1216 21:02:41.489989   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.490000   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:41.490007   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:41.490069   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:41.532891   60933 cri.go:89] found id: ""
	I1216 21:02:41.532918   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.532930   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:41.532937   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:41.532992   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:41.570315   60933 cri.go:89] found id: ""
	I1216 21:02:41.570342   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.570351   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:41.570357   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:41.570455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:41.606833   60933 cri.go:89] found id: ""
	I1216 21:02:41.606867   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.606880   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:41.606890   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:41.606959   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:41.643862   60933 cri.go:89] found id: ""
	I1216 21:02:41.643886   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.643894   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:41.643902   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:41.643914   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:41.657621   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:41.657654   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:41.732256   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:41.732281   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:41.732295   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:41.822045   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:41.822081   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:41.863900   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:41.863933   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:40.436149   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:42.436247   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:44.436916   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:41.321155   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:43.819721   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:43.956057   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:46.455802   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:44.425154   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:44.440148   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:44.440223   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:44.478216   60933 cri.go:89] found id: ""
	I1216 21:02:44.478247   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.478258   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:44.478266   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:44.478329   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:44.517054   60933 cri.go:89] found id: ""
	I1216 21:02:44.517078   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.517084   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:44.517090   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:44.517137   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:44.554683   60933 cri.go:89] found id: ""
	I1216 21:02:44.554778   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.554801   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:44.554845   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:44.554927   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:44.600748   60933 cri.go:89] found id: ""
	I1216 21:02:44.600788   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.600800   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:44.600809   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:44.600863   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:44.637564   60933 cri.go:89] found id: ""
	I1216 21:02:44.637592   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.637600   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:44.637606   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:44.637656   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:44.676619   60933 cri.go:89] found id: ""
	I1216 21:02:44.676662   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.676674   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:44.676683   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:44.676755   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:44.715920   60933 cri.go:89] found id: ""
	I1216 21:02:44.715956   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.715964   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:44.715970   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:44.716027   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:44.755134   60933 cri.go:89] found id: ""
	I1216 21:02:44.755167   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.755179   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:44.755191   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:44.755202   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:44.796135   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:44.796164   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:44.850550   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:44.850593   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:44.865278   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:44.865305   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:44.942987   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:44.943013   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:44.943026   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:47.529850   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:47.546292   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:47.546369   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:47.589597   60933 cri.go:89] found id: ""
	I1216 21:02:47.589627   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.589640   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:47.589648   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:47.589713   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:47.630998   60933 cri.go:89] found id: ""
	I1216 21:02:47.631030   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.631043   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:47.631051   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:47.631118   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:47.670118   60933 cri.go:89] found id: ""
	I1216 21:02:47.670150   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.670162   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:47.670169   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:47.670233   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:47.714516   60933 cri.go:89] found id: ""
	I1216 21:02:47.714549   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.714560   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:47.714568   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:47.714631   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:47.752042   60933 cri.go:89] found id: ""
	I1216 21:02:47.752074   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.752086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:47.752093   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:47.752158   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:47.793612   60933 cri.go:89] found id: ""
	I1216 21:02:47.793645   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.793656   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:47.793664   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:47.793734   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:47.833489   60933 cri.go:89] found id: ""
	I1216 21:02:47.833518   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.833529   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:47.833541   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:47.833602   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:47.869744   60933 cri.go:89] found id: ""
	I1216 21:02:47.869772   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.869783   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:47.869793   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:47.869809   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:47.910640   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:47.910674   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:47.965747   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:47.965781   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:47.979760   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:47.979786   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:48.056887   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:48.056917   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:48.056933   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:46.439409   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.937248   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:46.320935   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.820700   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.955697   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.955859   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.641224   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:50.657267   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:50.657346   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:50.696890   60933 cri.go:89] found id: ""
	I1216 21:02:50.696916   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.696924   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:50.696930   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:50.696993   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:50.734485   60933 cri.go:89] found id: ""
	I1216 21:02:50.734514   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.734524   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:50.734533   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:50.734598   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:50.776241   60933 cri.go:89] found id: ""
	I1216 21:02:50.776268   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.776277   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:50.776283   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:50.776358   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:50.816449   60933 cri.go:89] found id: ""
	I1216 21:02:50.816482   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.816493   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:50.816501   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:50.816561   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:50.857458   60933 cri.go:89] found id: ""
	I1216 21:02:50.857481   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.857488   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:50.857494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:50.857556   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:50.895367   60933 cri.go:89] found id: ""
	I1216 21:02:50.895391   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.895398   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:50.895404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:50.895466   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:50.934101   60933 cri.go:89] found id: ""
	I1216 21:02:50.934128   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.934138   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:50.934152   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:50.934212   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:50.978625   60933 cri.go:89] found id: ""
	I1216 21:02:50.978654   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.978665   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:50.978675   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:50.978688   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:51.061867   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:51.061908   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:51.101188   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:51.101228   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:51.157426   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:51.157470   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:51.172835   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:51.172882   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:51.247678   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:53.748503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:53.763357   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:53.763425   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:53.807963   60933 cri.go:89] found id: ""
	I1216 21:02:53.807990   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.807999   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:53.808005   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:53.808063   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:53.846840   60933 cri.go:89] found id: ""
	I1216 21:02:53.846867   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.846876   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:53.846881   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:53.846929   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:53.885099   60933 cri.go:89] found id: ""
	I1216 21:02:53.885131   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.885146   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:53.885156   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:53.885226   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:53.923859   60933 cri.go:89] found id: ""
	I1216 21:02:53.923890   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.923901   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:53.923908   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:53.923972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:53.964150   60933 cri.go:89] found id: ""
	I1216 21:02:53.964176   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.964186   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:53.964201   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:53.964265   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:54.004676   60933 cri.go:89] found id: ""
	I1216 21:02:54.004707   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.004718   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:54.004725   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:54.004789   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:54.042560   60933 cri.go:89] found id: ""
	I1216 21:02:54.042585   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.042595   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:54.042603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:54.042666   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:54.081002   60933 cri.go:89] found id: ""
	I1216 21:02:54.081030   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.081038   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:54.081046   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:54.081058   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:54.132825   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:54.132865   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:54.147793   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:54.147821   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:54.226668   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:54.226692   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:54.226704   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:54.307792   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:54.307832   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:50.938230   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:53.436746   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.820949   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:53.320283   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:52.957187   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:54.958212   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.456612   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:56.852207   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:56.866404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:56.866469   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:56.911786   60933 cri.go:89] found id: ""
	I1216 21:02:56.911811   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.911820   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:56.911829   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:56.911886   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:56.953491   60933 cri.go:89] found id: ""
	I1216 21:02:56.953520   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.953535   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:56.953543   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:56.953610   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:56.991569   60933 cri.go:89] found id: ""
	I1216 21:02:56.991605   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.991616   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:56.991622   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:56.991685   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:57.026808   60933 cri.go:89] found id: ""
	I1216 21:02:57.026837   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.026845   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:57.026851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:57.026913   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:57.065539   60933 cri.go:89] found id: ""
	I1216 21:02:57.065569   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.065577   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:57.065583   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:57.065642   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:57.103911   60933 cri.go:89] found id: ""
	I1216 21:02:57.103942   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.103952   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:57.103960   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:57.104015   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:57.141177   60933 cri.go:89] found id: ""
	I1216 21:02:57.141200   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.141207   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:57.141213   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:57.141262   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:57.178532   60933 cri.go:89] found id: ""
	I1216 21:02:57.178590   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.178604   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:57.178614   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:57.178629   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:57.234811   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:57.234846   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:57.251540   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:57.251569   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:57.329029   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:57.329061   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:57.329077   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:57.412624   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:57.412665   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:55.436981   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.438061   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:55.819607   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.819648   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.820705   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.955043   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:01.956284   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.960422   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:59.974889   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:59.974966   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:00.012641   60933 cri.go:89] found id: ""
	I1216 21:03:00.012669   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.012676   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:00.012682   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:00.012730   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:00.053730   60933 cri.go:89] found id: ""
	I1216 21:03:00.053766   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.053778   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:00.053785   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:00.053847   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:00.091213   60933 cri.go:89] found id: ""
	I1216 21:03:00.091261   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.091274   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:00.091283   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:00.091357   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:00.131357   60933 cri.go:89] found id: ""
	I1216 21:03:00.131382   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.131390   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:00.131396   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:00.131460   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:00.168331   60933 cri.go:89] found id: ""
	I1216 21:03:00.168362   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.168373   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:00.168380   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:00.168446   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:00.208326   60933 cri.go:89] found id: ""
	I1216 21:03:00.208360   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.208369   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:00.208377   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:00.208440   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:00.245775   60933 cri.go:89] found id: ""
	I1216 21:03:00.245800   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.245808   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:00.245814   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:00.245863   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:00.283062   60933 cri.go:89] found id: ""
	I1216 21:03:00.283091   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.283100   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:00.283108   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:00.283119   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:00.358767   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:00.358787   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:00.358799   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:00.443422   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:00.443460   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:00.491511   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:00.491551   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:00.566131   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:00.566172   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:03.080319   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:03.094733   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:03.094818   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:03.132388   60933 cri.go:89] found id: ""
	I1216 21:03:03.132419   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.132428   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:03.132433   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:03.132488   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:03.172345   60933 cri.go:89] found id: ""
	I1216 21:03:03.172374   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.172386   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:03.172393   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:03.172474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:03.210444   60933 cri.go:89] found id: ""
	I1216 21:03:03.210479   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.210488   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:03.210494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:03.210544   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:03.248605   60933 cri.go:89] found id: ""
	I1216 21:03:03.248644   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.248656   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:03.248664   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:03.248723   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:03.286822   60933 cri.go:89] found id: ""
	I1216 21:03:03.286854   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.286862   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:03.286868   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:03.286921   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:03.329304   60933 cri.go:89] found id: ""
	I1216 21:03:03.329333   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.329344   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:03.329352   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:03.329417   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:03.367337   60933 cri.go:89] found id: ""
	I1216 21:03:03.367361   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.367368   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:03.367373   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:03.367420   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:03.409799   60933 cri.go:89] found id: ""
	I1216 21:03:03.409821   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.409829   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:03.409838   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:03.409850   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:03.466941   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:03.466976   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:03.483090   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:03.483117   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:03.566835   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:03.566860   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:03.566878   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:03.649747   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:03.649793   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:59.936221   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:01.936251   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:03.936714   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:02.319063   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:04.319653   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:03.956397   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:05.956531   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:06.193505   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:06.207797   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:06.207878   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:06.245401   60933 cri.go:89] found id: ""
	I1216 21:03:06.245437   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.245448   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:06.245456   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:06.245521   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:06.301205   60933 cri.go:89] found id: ""
	I1216 21:03:06.301239   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.301250   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:06.301257   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:06.301326   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:06.340325   60933 cri.go:89] found id: ""
	I1216 21:03:06.340352   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.340362   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:06.340369   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:06.340429   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:06.378321   60933 cri.go:89] found id: ""
	I1216 21:03:06.378351   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.378359   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:06.378365   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:06.378422   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:06.416354   60933 cri.go:89] found id: ""
	I1216 21:03:06.416390   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.416401   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:06.416409   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:06.416473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:06.459926   60933 cri.go:89] found id: ""
	I1216 21:03:06.459955   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.459967   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:06.459975   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:06.460063   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:06.501818   60933 cri.go:89] found id: ""
	I1216 21:03:06.501849   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.501860   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:06.501866   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:06.501926   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:06.537552   60933 cri.go:89] found id: ""
	I1216 21:03:06.537583   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.537598   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:06.537607   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:06.537621   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:06.592170   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:06.592212   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:06.607148   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:06.607183   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:06.676114   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:06.676140   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:06.676151   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:06.756009   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:06.756052   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:09.298166   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:09.313104   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:09.313189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:09.356598   60933 cri.go:89] found id: ""
	I1216 21:03:09.356625   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.356640   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:09.356649   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:09.356715   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:05.937241   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:07.938858   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:06.322260   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:08.818974   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:08.455838   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:10.955332   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:09.395406   60933 cri.go:89] found id: ""
	I1216 21:03:09.395439   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.395449   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:09.395456   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:09.395521   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:09.440401   60933 cri.go:89] found id: ""
	I1216 21:03:09.440423   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.440430   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:09.440435   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:09.440504   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:09.478798   60933 cri.go:89] found id: ""
	I1216 21:03:09.478828   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.478843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:09.478853   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:09.478921   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:09.515542   60933 cri.go:89] found id: ""
	I1216 21:03:09.515575   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.515587   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:09.515596   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:09.515654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:09.554150   60933 cri.go:89] found id: ""
	I1216 21:03:09.554183   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.554194   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:09.554205   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:09.554279   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:09.591699   60933 cri.go:89] found id: ""
	I1216 21:03:09.591730   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.591740   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:09.591747   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:09.591811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:09.629938   60933 cri.go:89] found id: ""
	I1216 21:03:09.629970   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.629980   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:09.629991   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:09.630008   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:09.711255   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:09.711284   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:09.711300   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:09.790202   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:09.790243   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:09.839567   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:09.839597   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:09.893010   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:09.893050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:12.409934   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:12.423715   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:12.423789   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:12.461995   60933 cri.go:89] found id: ""
	I1216 21:03:12.462038   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.462046   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:12.462052   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:12.462101   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:12.501738   60933 cri.go:89] found id: ""
	I1216 21:03:12.501769   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.501779   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:12.501785   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:12.501833   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:12.541758   60933 cri.go:89] found id: ""
	I1216 21:03:12.541785   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.541795   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:12.541802   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:12.541850   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:12.579173   60933 cri.go:89] found id: ""
	I1216 21:03:12.579199   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.579206   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:12.579212   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:12.579302   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:12.624382   60933 cri.go:89] found id: ""
	I1216 21:03:12.624407   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.624418   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:12.624426   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:12.624488   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:12.665139   60933 cri.go:89] found id: ""
	I1216 21:03:12.665178   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.665190   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:12.665200   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:12.665274   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:12.711586   60933 cri.go:89] found id: ""
	I1216 21:03:12.711611   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.711619   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:12.711627   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:12.711678   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:12.761566   60933 cri.go:89] found id: ""
	I1216 21:03:12.761600   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.761612   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:12.761624   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:12.761640   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:12.824282   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:12.824315   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:12.839335   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:12.839371   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:12.918317   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:12.918341   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:12.918357   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:13.000375   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:13.000410   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:10.438136   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:12.936742   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:11.319284   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:13.320036   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:15.322965   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:12.955450   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:14.956186   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:16.956603   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:15.542372   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:15.556877   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:15.556960   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:15.599345   60933 cri.go:89] found id: ""
	I1216 21:03:15.599378   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.599389   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:15.599414   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:15.599479   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:15.642072   60933 cri.go:89] found id: ""
	I1216 21:03:15.642106   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.642116   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:15.642124   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:15.642189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:15.679989   60933 cri.go:89] found id: ""
	I1216 21:03:15.680025   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.680036   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:15.680044   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:15.680103   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:15.718343   60933 cri.go:89] found id: ""
	I1216 21:03:15.718371   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.718378   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:15.718384   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:15.718433   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:15.759937   60933 cri.go:89] found id: ""
	I1216 21:03:15.759971   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.759981   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:15.759988   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:15.760081   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:15.801434   60933 cri.go:89] found id: ""
	I1216 21:03:15.801463   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.801471   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:15.801477   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:15.801540   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:15.841855   60933 cri.go:89] found id: ""
	I1216 21:03:15.841879   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.841886   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:15.841892   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:15.841962   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:15.883951   60933 cri.go:89] found id: ""
	I1216 21:03:15.883974   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.883982   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:15.883990   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:15.884004   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:15.960868   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:15.960902   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:16.005700   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:16.005730   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:16.061128   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:16.061165   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:16.075601   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:16.075630   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:16.147810   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:18.648677   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:18.663298   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:18.663367   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:18.713281   60933 cri.go:89] found id: ""
	I1216 21:03:18.713313   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.713324   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:18.713332   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:18.713396   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:18.764861   60933 cri.go:89] found id: ""
	I1216 21:03:18.764892   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.764905   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:18.764912   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:18.764978   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:18.816140   60933 cri.go:89] found id: ""
	I1216 21:03:18.816170   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.816180   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:18.816188   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:18.816251   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:18.852118   60933 cri.go:89] found id: ""
	I1216 21:03:18.852151   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.852163   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:18.852171   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:18.852235   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:18.887996   60933 cri.go:89] found id: ""
	I1216 21:03:18.888018   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.888025   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:18.888031   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:18.888089   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:18.925415   60933 cri.go:89] found id: ""
	I1216 21:03:18.925437   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.925445   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:18.925451   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:18.925498   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:18.964853   60933 cri.go:89] found id: ""
	I1216 21:03:18.964884   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.964892   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:18.964897   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:18.964964   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:19.000822   60933 cri.go:89] found id: ""
	I1216 21:03:19.000848   60933 logs.go:282] 0 containers: []
	W1216 21:03:19.000856   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:19.000865   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:19.000879   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:19.051571   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:19.051612   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:19.066737   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:19.066767   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:19.143120   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:19.143144   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:19.143156   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:19.229811   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:19.229850   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:15.437189   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:17.439345   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:17.820374   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:19.820460   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:19.455707   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:21.955275   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:21.776440   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:21.792869   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:21.792951   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:21.831100   60933 cri.go:89] found id: ""
	I1216 21:03:21.831127   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.831134   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:21.831140   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:21.831196   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:21.869124   60933 cri.go:89] found id: ""
	I1216 21:03:21.869147   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.869155   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:21.869160   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:21.869215   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:21.909891   60933 cri.go:89] found id: ""
	I1216 21:03:21.909926   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.909938   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:21.909946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:21.910032   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:21.949140   60933 cri.go:89] found id: ""
	I1216 21:03:21.949169   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.949179   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:21.949186   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:21.949245   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:21.987741   60933 cri.go:89] found id: ""
	I1216 21:03:21.987771   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.987780   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:21.987785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:21.987839   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:22.025565   60933 cri.go:89] found id: ""
	I1216 21:03:22.025593   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.025601   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:22.025607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:22.025659   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:22.062076   60933 cri.go:89] found id: ""
	I1216 21:03:22.062110   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.062120   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:22.062127   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:22.062198   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:22.102037   60933 cri.go:89] found id: ""
	I1216 21:03:22.102065   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.102093   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:22.102105   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:22.102122   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:22.159185   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:22.159219   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:22.175139   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:22.175168   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:22.255769   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:22.255801   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:22.255817   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:22.339633   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:22.339681   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:19.937328   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:22.435709   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.436704   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:22.319227   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.819278   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.455668   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:26.956382   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.883865   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:24.898198   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:24.898287   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:24.939472   60933 cri.go:89] found id: ""
	I1216 21:03:24.939500   60933 logs.go:282] 0 containers: []
	W1216 21:03:24.939511   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:24.939518   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:24.939583   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:24.981798   60933 cri.go:89] found id: ""
	I1216 21:03:24.981822   60933 logs.go:282] 0 containers: []
	W1216 21:03:24.981829   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:24.981834   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:24.981889   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:25.021332   60933 cri.go:89] found id: ""
	I1216 21:03:25.021366   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.021373   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:25.021379   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:25.021431   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:25.057811   60933 cri.go:89] found id: ""
	I1216 21:03:25.057836   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.057843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:25.057848   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:25.057907   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:25.093852   60933 cri.go:89] found id: ""
	I1216 21:03:25.093881   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.093890   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:25.093895   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:25.093945   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:25.132779   60933 cri.go:89] found id: ""
	I1216 21:03:25.132813   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.132825   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:25.132834   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:25.132912   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:25.173942   60933 cri.go:89] found id: ""
	I1216 21:03:25.173967   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.173974   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:25.173990   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:25.174048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:25.213105   60933 cri.go:89] found id: ""
	I1216 21:03:25.213127   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.213135   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:25.213144   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:25.213155   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:25.267517   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:25.267557   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:25.284144   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:25.284177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:25.362901   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:25.362931   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:25.362947   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:25.450193   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:25.450227   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:27.995716   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:28.012044   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:28.012138   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:28.050404   60933 cri.go:89] found id: ""
	I1216 21:03:28.050432   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.050441   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:28.050446   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:28.050492   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:28.087830   60933 cri.go:89] found id: ""
	I1216 21:03:28.087855   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.087862   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:28.087885   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:28.087933   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:28.125122   60933 cri.go:89] found id: ""
	I1216 21:03:28.125147   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.125154   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:28.125160   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:28.125233   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:28.160619   60933 cri.go:89] found id: ""
	I1216 21:03:28.160646   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.160655   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:28.160661   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:28.160726   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:28.198951   60933 cri.go:89] found id: ""
	I1216 21:03:28.198977   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.198986   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:28.198993   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:28.199059   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:28.236596   60933 cri.go:89] found id: ""
	I1216 21:03:28.236621   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.236629   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:28.236635   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:28.236707   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:28.273955   60933 cri.go:89] found id: ""
	I1216 21:03:28.273979   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.273986   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:28.273992   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:28.274061   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:28.311908   60933 cri.go:89] found id: ""
	I1216 21:03:28.311943   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.311954   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:28.311965   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:28.311979   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:28.363870   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:28.363910   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:28.379919   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:28.379945   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:28.459998   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:28.460019   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:28.460030   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:28.543229   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:28.543306   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:26.936661   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:29.437169   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:26.820563   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:29.319981   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:28.956791   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.456708   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.086525   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:31.100833   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:31.100950   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:31.141356   60933 cri.go:89] found id: ""
	I1216 21:03:31.141385   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.141396   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:31.141403   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:31.141465   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:31.176609   60933 cri.go:89] found id: ""
	I1216 21:03:31.176641   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.176650   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:31.176657   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:31.176721   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:31.213959   60933 cri.go:89] found id: ""
	I1216 21:03:31.213984   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.213991   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:31.213997   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:31.214058   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:31.255183   60933 cri.go:89] found id: ""
	I1216 21:03:31.255208   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.255215   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:31.255220   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:31.255297   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:31.293475   60933 cri.go:89] found id: ""
	I1216 21:03:31.293501   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.293508   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:31.293514   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:31.293561   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:31.332010   60933 cri.go:89] found id: ""
	I1216 21:03:31.332041   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.332052   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:31.332061   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:31.332119   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:31.370301   60933 cri.go:89] found id: ""
	I1216 21:03:31.370331   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.370342   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:31.370349   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:31.370414   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:31.419526   60933 cri.go:89] found id: ""
	I1216 21:03:31.419553   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.419561   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:31.419570   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:31.419583   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:31.480125   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:31.480160   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:31.495464   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:31.495497   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:31.570747   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:31.570773   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:31.570788   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:31.651521   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:31.651564   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:34.200969   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:34.216519   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:34.216596   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:34.254185   60933 cri.go:89] found id: ""
	I1216 21:03:34.254218   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.254227   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:34.254242   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:34.254312   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:34.293194   60933 cri.go:89] found id: ""
	I1216 21:03:34.293225   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.293236   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:34.293242   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:34.293297   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:34.335002   60933 cri.go:89] found id: ""
	I1216 21:03:34.335030   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.335042   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:34.335050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:34.335112   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:34.370854   60933 cri.go:89] found id: ""
	I1216 21:03:34.370880   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.370887   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:34.370893   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:34.370938   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:31.439597   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.935941   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.820337   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.820497   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.955185   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:36.455713   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:34.409155   60933 cri.go:89] found id: ""
	I1216 21:03:34.409181   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.409189   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:34.409195   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:34.409256   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:34.448555   60933 cri.go:89] found id: ""
	I1216 21:03:34.448583   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.448594   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:34.448601   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:34.448663   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:34.486800   60933 cri.go:89] found id: ""
	I1216 21:03:34.486829   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.486842   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:34.486851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:34.486919   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:34.530274   60933 cri.go:89] found id: ""
	I1216 21:03:34.530299   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.530307   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:34.530317   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:34.530335   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:34.601587   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:34.601620   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:34.601637   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:34.680215   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:34.680250   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:34.721362   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:34.721389   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:34.776652   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:34.776693   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:37.292877   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:37.306976   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:37.307060   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:37.349370   60933 cri.go:89] found id: ""
	I1216 21:03:37.349405   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.349416   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:37.349424   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:37.349486   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:37.387213   60933 cri.go:89] found id: ""
	I1216 21:03:37.387271   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.387285   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:37.387294   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:37.387361   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:37.427138   60933 cri.go:89] found id: ""
	I1216 21:03:37.427164   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.427175   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:37.427182   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:37.427269   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:37.466751   60933 cri.go:89] found id: ""
	I1216 21:03:37.466776   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.466783   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:37.466788   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:37.466846   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:37.505078   60933 cri.go:89] found id: ""
	I1216 21:03:37.505115   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.505123   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:37.505128   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:37.505189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:37.548642   60933 cri.go:89] found id: ""
	I1216 21:03:37.548665   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.548673   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:37.548679   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:37.548738   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:37.592354   60933 cri.go:89] found id: ""
	I1216 21:03:37.592379   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.592386   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:37.592391   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:37.592441   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:37.631179   60933 cri.go:89] found id: ""
	I1216 21:03:37.631212   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.631221   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:37.631230   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:37.631261   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:37.683021   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:37.683062   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:37.698056   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:37.698087   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:37.774368   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:37.774397   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:37.774422   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:37.860470   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:37.860511   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:35.936409   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:37.936652   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:36.319436   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:38.819727   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:38.456251   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.957354   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.405278   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:40.420390   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:40.420473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:40.463963   60933 cri.go:89] found id: ""
	I1216 21:03:40.463994   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.464033   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:40.464041   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:40.464107   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:40.510321   60933 cri.go:89] found id: ""
	I1216 21:03:40.510352   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.510369   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:40.510376   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:40.510441   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:40.546580   60933 cri.go:89] found id: ""
	I1216 21:03:40.546610   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.546619   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:40.546624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:40.546686   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:40.583109   60933 cri.go:89] found id: ""
	I1216 21:03:40.583136   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.583144   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:40.583149   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:40.583202   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:40.628747   60933 cri.go:89] found id: ""
	I1216 21:03:40.628771   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.628778   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:40.628784   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:40.628845   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:40.663757   60933 cri.go:89] found id: ""
	I1216 21:03:40.663785   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.663796   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:40.663804   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:40.663867   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:40.703483   60933 cri.go:89] found id: ""
	I1216 21:03:40.703513   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.703522   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:40.703528   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:40.703592   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:40.742585   60933 cri.go:89] found id: ""
	I1216 21:03:40.742622   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.742632   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:40.742641   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:40.742653   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:40.757771   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:40.757809   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:40.837615   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:40.837642   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:40.837656   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:40.915403   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:40.915442   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:40.960762   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:40.960790   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:43.515302   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:43.530831   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:43.530906   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:43.571680   60933 cri.go:89] found id: ""
	I1216 21:03:43.571704   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.571712   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:43.571718   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:43.571779   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:43.615912   60933 cri.go:89] found id: ""
	I1216 21:03:43.615940   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.615948   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:43.615955   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:43.616013   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:43.654206   60933 cri.go:89] found id: ""
	I1216 21:03:43.654231   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.654241   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:43.654249   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:43.654309   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:43.690509   60933 cri.go:89] found id: ""
	I1216 21:03:43.690533   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.690541   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:43.690548   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:43.690595   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:43.728601   60933 cri.go:89] found id: ""
	I1216 21:03:43.728627   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.728634   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:43.728639   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:43.728685   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:43.769092   60933 cri.go:89] found id: ""
	I1216 21:03:43.769130   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.769198   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:43.769215   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:43.769292   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:43.812492   60933 cri.go:89] found id: ""
	I1216 21:03:43.812525   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.812537   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:43.812544   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:43.812613   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:43.852748   60933 cri.go:89] found id: ""
	I1216 21:03:43.852778   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.852787   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:43.852795   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:43.852807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:43.907800   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:43.907839   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:43.922806   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:43.922833   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:44.002511   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:44.002538   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:44.002551   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:44.081760   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:44.081801   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:40.437134   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:42.437214   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.820244   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:43.321298   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:43.455891   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:45.456281   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:46.625868   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:46.640266   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:46.640341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:46.677137   60933 cri.go:89] found id: ""
	I1216 21:03:46.677168   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.677179   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:46.677185   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:46.677241   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:46.714340   60933 cri.go:89] found id: ""
	I1216 21:03:46.714373   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.714382   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:46.714389   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:46.714449   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:46.752713   60933 cri.go:89] found id: ""
	I1216 21:03:46.752743   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.752754   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:46.752763   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:46.752827   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:46.790787   60933 cri.go:89] found id: ""
	I1216 21:03:46.790821   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.790837   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:46.790845   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:46.790902   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:46.827905   60933 cri.go:89] found id: ""
	I1216 21:03:46.827934   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.827945   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:46.827954   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:46.828023   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:46.863522   60933 cri.go:89] found id: ""
	I1216 21:03:46.863547   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.863563   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:46.863570   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:46.863634   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:46.906005   60933 cri.go:89] found id: ""
	I1216 21:03:46.906035   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.906044   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:46.906049   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:46.906103   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:46.947639   60933 cri.go:89] found id: ""
	I1216 21:03:46.947668   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.947679   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:46.947691   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:46.947706   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:47.001693   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:47.001732   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:47.023122   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:47.023166   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:47.108257   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:47.108291   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:47.108303   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:47.184768   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:47.184807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:44.940074   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.437155   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:45.819943   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.820443   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.820700   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.955794   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.960595   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:52.455630   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.729433   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:49.743836   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:49.743903   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:49.783021   60933 cri.go:89] found id: ""
	I1216 21:03:49.783054   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.783066   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:49.783074   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:49.783144   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:49.820371   60933 cri.go:89] found id: ""
	I1216 21:03:49.820399   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.820409   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:49.820416   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:49.820476   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:49.857918   60933 cri.go:89] found id: ""
	I1216 21:03:49.857948   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.857959   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:49.857967   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:49.858033   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:49.899517   60933 cri.go:89] found id: ""
	I1216 21:03:49.899548   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.899558   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:49.899565   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:49.899632   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:49.938771   60933 cri.go:89] found id: ""
	I1216 21:03:49.938797   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.938805   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:49.938810   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:49.938857   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:49.975748   60933 cri.go:89] found id: ""
	I1216 21:03:49.975781   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.975792   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:49.975800   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:49.975876   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:50.013057   60933 cri.go:89] found id: ""
	I1216 21:03:50.013082   60933 logs.go:282] 0 containers: []
	W1216 21:03:50.013090   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:50.013127   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:50.013178   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:50.049106   60933 cri.go:89] found id: ""
	I1216 21:03:50.049138   60933 logs.go:282] 0 containers: []
	W1216 21:03:50.049150   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:50.049161   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:50.049176   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:50.063815   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:50.063847   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:50.137801   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:50.137826   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:50.137841   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:50.218456   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:50.218495   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:50.263347   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:50.263379   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:52.824077   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:52.838096   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:52.838185   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:52.880550   60933 cri.go:89] found id: ""
	I1216 21:03:52.880582   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.880593   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:52.880600   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:52.880658   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:52.919728   60933 cri.go:89] found id: ""
	I1216 21:03:52.919751   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.919759   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:52.919764   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:52.919819   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:52.957519   60933 cri.go:89] found id: ""
	I1216 21:03:52.957542   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.957549   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:52.957555   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:52.957607   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:52.996631   60933 cri.go:89] found id: ""
	I1216 21:03:52.996663   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.996673   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:52.996681   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:52.996745   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:53.059902   60933 cri.go:89] found id: ""
	I1216 21:03:53.060014   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.060030   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:53.060039   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:53.060105   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:53.099367   60933 cri.go:89] found id: ""
	I1216 21:03:53.099395   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.099406   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:53.099419   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:53.099486   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:53.140668   60933 cri.go:89] found id: ""
	I1216 21:03:53.140696   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.140704   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:53.140709   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:53.140777   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:53.179182   60933 cri.go:89] found id: ""
	I1216 21:03:53.179208   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.179216   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:53.179225   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:53.179236   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:53.233441   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:53.233481   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:53.247526   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:53.247569   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:53.321868   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:53.321895   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:53.321911   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:53.410904   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:53.410959   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:49.936523   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:51.936955   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.441538   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:52.319658   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.319887   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.955490   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:57.456080   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:55.954371   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:55.968506   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:55.968570   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:56.005087   60933 cri.go:89] found id: ""
	I1216 21:03:56.005118   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.005130   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:56.005137   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:56.005205   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:56.039443   60933 cri.go:89] found id: ""
	I1216 21:03:56.039467   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.039475   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:56.039486   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:56.039537   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:56.078181   60933 cri.go:89] found id: ""
	I1216 21:03:56.078213   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.078224   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:56.078231   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:56.078289   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:56.115809   60933 cri.go:89] found id: ""
	I1216 21:03:56.115833   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.115841   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:56.115848   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:56.115901   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:56.154299   60933 cri.go:89] found id: ""
	I1216 21:03:56.154323   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.154330   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:56.154336   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:56.154395   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:56.193069   60933 cri.go:89] found id: ""
	I1216 21:03:56.193098   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.193106   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:56.193112   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:56.193161   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:56.231067   60933 cri.go:89] found id: ""
	I1216 21:03:56.231099   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.231118   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:56.231125   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:56.231191   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:56.270980   60933 cri.go:89] found id: ""
	I1216 21:03:56.271011   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.271022   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:56.271035   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:56.271050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:56.321374   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:56.321405   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:56.336802   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:56.336847   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:56.414052   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:56.414078   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:56.414091   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:56.499118   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:56.499158   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:59.049386   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:59.063191   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:59.063300   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:59.102136   60933 cri.go:89] found id: ""
	I1216 21:03:59.102169   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.102180   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:59.102187   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:59.102255   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:59.138311   60933 cri.go:89] found id: ""
	I1216 21:03:59.138340   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.138357   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:59.138364   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:59.138431   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:59.176131   60933 cri.go:89] found id: ""
	I1216 21:03:59.176159   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.176169   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:59.176177   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:59.176259   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:59.214274   60933 cri.go:89] found id: ""
	I1216 21:03:59.214308   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.214320   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:59.214327   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:59.214397   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:59.254499   60933 cri.go:89] found id: ""
	I1216 21:03:59.254524   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.254531   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:59.254537   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:59.254602   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:59.292715   60933 cri.go:89] found id: ""
	I1216 21:03:59.292755   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.292765   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:59.292772   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:59.292836   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:59.333279   60933 cri.go:89] found id: ""
	I1216 21:03:59.333314   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.333325   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:59.333332   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:59.333404   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:59.372071   60933 cri.go:89] found id: ""
	I1216 21:03:59.372104   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.372116   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:59.372126   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:59.372143   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:59.389021   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:59.389051   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 21:03:56.936508   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:59.438217   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:56.323300   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:58.819599   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:59.456242   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:01.956873   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	W1216 21:03:59.503281   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:59.503304   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:59.503316   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:59.581761   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:59.581797   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:59.627604   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:59.627646   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:02.179425   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:02.195786   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:04:02.195850   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:04:02.239763   60933 cri.go:89] found id: ""
	I1216 21:04:02.239790   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.239801   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:04:02.239809   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:04:02.239873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:04:02.278885   60933 cri.go:89] found id: ""
	I1216 21:04:02.278914   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.278926   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:04:02.278935   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:04:02.279004   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:04:02.320701   60933 cri.go:89] found id: ""
	I1216 21:04:02.320731   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.320742   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:04:02.320749   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:04:02.320811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:04:02.357726   60933 cri.go:89] found id: ""
	I1216 21:04:02.357757   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.357767   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:04:02.357773   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:04:02.357826   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:04:02.399577   60933 cri.go:89] found id: ""
	I1216 21:04:02.399609   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.399618   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:04:02.399624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:04:02.399687   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:04:02.445559   60933 cri.go:89] found id: ""
	I1216 21:04:02.445590   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.445600   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:04:02.445607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:04:02.445670   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:04:02.482983   60933 cri.go:89] found id: ""
	I1216 21:04:02.483015   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.483027   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:04:02.483035   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:04:02.483116   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:04:02.523028   60933 cri.go:89] found id: ""
	I1216 21:04:02.523055   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.523063   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:04:02.523073   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:04:02.523084   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:02.577447   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:04:02.577487   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:04:02.594539   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:04:02.594567   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:04:02.683805   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:04:02.683832   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:04:02.683848   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:04:02.763377   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:04:02.763416   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:04:01.937214   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:04.436771   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:01.319860   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:03.320323   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:04.454654   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:06.456145   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:05.311029   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:05.328358   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:04:05.328438   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:04:05.367378   60933 cri.go:89] found id: ""
	I1216 21:04:05.367402   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.367409   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:04:05.367419   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:04:05.367468   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:04:05.406268   60933 cri.go:89] found id: ""
	I1216 21:04:05.406291   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.406301   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:04:05.406306   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:04:05.406353   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:04:05.444737   60933 cri.go:89] found id: ""
	I1216 21:04:05.444767   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.444778   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:04:05.444787   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:04:05.444836   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:04:05.484044   60933 cri.go:89] found id: ""
	I1216 21:04:05.484132   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.484153   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:04:05.484161   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:04:05.484222   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:04:05.523395   60933 cri.go:89] found id: ""
	I1216 21:04:05.523420   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.523431   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:04:05.523439   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:04:05.523501   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:04:05.566925   60933 cri.go:89] found id: ""
	I1216 21:04:05.566954   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.566967   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:04:05.566974   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:04:05.567036   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:04:05.611275   60933 cri.go:89] found id: ""
	I1216 21:04:05.611303   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.611314   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:04:05.611321   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:04:05.611396   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:04:05.650340   60933 cri.go:89] found id: ""
	I1216 21:04:05.650371   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.650379   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:04:05.650389   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:04:05.650400   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:05.702277   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:04:05.702321   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:04:05.718685   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:04:05.718713   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:04:05.794979   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:04:05.795005   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:04:05.795020   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:04:05.897348   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:04:05.897383   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:04:08.447268   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:08.462553   60933 kubeadm.go:597] duration metric: took 4m2.545161532s to restartPrimaryControlPlane
	W1216 21:04:08.462621   60933 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:08.462650   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:06.437699   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:08.936904   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:05.813413   60829 pod_ready.go:82] duration metric: took 4m0.000648161s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:05.813448   60829 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:05.813472   60829 pod_ready.go:39] duration metric: took 4m14.577422135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:05.813498   60829 kubeadm.go:597] duration metric: took 4m22.010606819s to restartPrimaryControlPlane
	W1216 21:04:05.813559   60829 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:05.813593   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:10.315541   60933 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.85286561s)
	I1216 21:04:10.315622   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:10.330937   60933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:10.343702   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:10.356498   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:10.356526   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:10.356579   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:04:10.367777   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:10.367847   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:10.379109   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:04:10.389258   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:10.389313   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:10.399959   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:04:10.410664   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:10.410734   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:10.423138   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:04:10.433922   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:10.433976   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:04:10.445297   60933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:10.524236   60933 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 21:04:10.524344   60933 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:10.680331   60933 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:10.680489   60933 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:10.680641   60933 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 21:04:10.877305   60933 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:10.879375   60933 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:10.879496   60933 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:10.879567   60933 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:10.879647   60933 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:10.879748   60933 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:10.879865   60933 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:10.880127   60933 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:10.881047   60933 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:10.881874   60933 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:10.882778   60933 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:10.883678   60933 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:10.884029   60933 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:10.884130   60933 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:11.034011   60933 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:11.273509   60933 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:11.477553   60933 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:11.542158   60933 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:11.565791   60933 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:11.567317   60933 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:11.567409   60933 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:11.763223   60933 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:08.955135   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:10.957061   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:11.766107   60933 out.go:235]   - Booting up control plane ...
	I1216 21:04:11.766257   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:11.766367   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:11.768484   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:11.773601   60933 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:11.780554   60933 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 21:04:11.436931   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:13.437532   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:13.455175   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:15.455370   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.456801   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:15.936107   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.937233   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.949449   60421 pod_ready.go:82] duration metric: took 4m0.000885381s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:17.949484   60421 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:17.949501   60421 pod_ready.go:39] duration metric: took 4m10.554596731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:17.949525   60421 kubeadm.go:597] duration metric: took 4m42.414672113s to restartPrimaryControlPlane
	W1216 21:04:17.949588   60421 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:17.949619   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:19.938104   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:22.436710   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:24.936550   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:26.936809   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:29.437478   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:33.833179   60829 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (28.019561403s)
	I1216 21:04:33.833265   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:33.850170   60829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:33.862112   60829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:33.873752   60829 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:33.873777   60829 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:33.873832   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1216 21:04:33.885038   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:33.885115   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:33.897352   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1216 21:04:33.911055   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:33.911115   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:33.925077   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1216 21:04:33.938925   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:33.938997   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:33.952022   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1216 21:04:33.963099   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:33.963176   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:04:33.974080   60829 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:34.031525   60829 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:04:34.031643   60829 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:34.153173   60829 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:34.153340   60829 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:34.153453   60829 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:04:34.166258   60829 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:31.936620   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:33.938157   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:34.168275   60829 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:34.168388   60829 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:34.168463   60829 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:34.168545   60829 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:34.168633   60829 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:34.168740   60829 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:34.168837   60829 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:34.168934   60829 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:34.169020   60829 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:34.169119   60829 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:34.169222   60829 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:34.169278   60829 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:34.169365   60829 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:34.277660   60829 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:34.526364   60829 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:04:34.629728   60829 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:34.757824   60829 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:34.838922   60829 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:34.839431   60829 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:34.841925   60829 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:34.843761   60829 out.go:235]   - Booting up control plane ...
	I1216 21:04:34.843874   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:34.843945   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:34.846919   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:34.866038   60829 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:34.875031   60829 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:34.875112   60829 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:35.016713   60829 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:04:35.016879   60829 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:04:36.437043   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:38.437584   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:36.017947   60829 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001159452s
	I1216 21:04:36.018086   60829 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:04:40.519460   60829 kubeadm.go:310] [api-check] The API server is healthy after 4.501460025s
	I1216 21:04:40.533680   60829 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:04:40.552611   60829 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:04:40.585691   60829 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:04:40.585905   60829 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-327790 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:04:40.613752   60829 kubeadm.go:310] [bootstrap-token] Using token: w829op.p4bszg1q76emsxit
	I1216 21:04:40.615428   60829 out.go:235]   - Configuring RBAC rules ...
	I1216 21:04:40.615556   60829 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:04:40.629296   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:04:40.638449   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:04:40.644143   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:04:40.648665   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:04:40.653151   60829 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:04:40.926399   60829 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:04:41.370569   60829 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:04:41.927555   60829 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:04:41.928692   60829 kubeadm.go:310] 
	I1216 21:04:41.928769   60829 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:04:41.928779   60829 kubeadm.go:310] 
	I1216 21:04:41.928851   60829 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:04:41.928878   60829 kubeadm.go:310] 
	I1216 21:04:41.928928   60829 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:04:41.929005   60829 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:04:41.929053   60829 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:04:41.929060   60829 kubeadm.go:310] 
	I1216 21:04:41.929107   60829 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:04:41.929114   60829 kubeadm.go:310] 
	I1216 21:04:41.929151   60829 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:04:41.929157   60829 kubeadm.go:310] 
	I1216 21:04:41.929205   60829 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:04:41.929264   60829 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:04:41.929325   60829 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:04:41.929354   60829 kubeadm.go:310] 
	I1216 21:04:41.929527   60829 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:04:41.929657   60829 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:04:41.929674   60829 kubeadm.go:310] 
	I1216 21:04:41.929787   60829 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token w829op.p4bszg1q76emsxit \
	I1216 21:04:41.929941   60829 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:04:41.929975   60829 kubeadm.go:310] 	--control-plane 
	I1216 21:04:41.929984   60829 kubeadm.go:310] 
	I1216 21:04:41.930103   60829 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:04:41.930124   60829 kubeadm.go:310] 
	I1216 21:04:41.930245   60829 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token w829op.p4bszg1q76emsxit \
	I1216 21:04:41.930378   60829 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:04:41.931554   60829 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:04:41.931685   60829 cni.go:84] Creating CNI manager for ""
	I1216 21:04:41.931699   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:04:41.933748   60829 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:04:40.937882   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:43.436864   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:41.935317   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:04:41.947502   60829 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:04:41.976180   60829 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:04:41.976288   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:41.976323   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-327790 minikube.k8s.io/updated_at=2024_12_16T21_04_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=default-k8s-diff-port-327790 minikube.k8s.io/primary=true
	I1216 21:04:42.010154   60829 ops.go:34] apiserver oom_adj: -16
	I1216 21:04:42.181919   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:42.682201   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:43.182557   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:43.682418   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:44.182318   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:44.682793   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.182342   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.682678   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.777484   60829 kubeadm.go:1113] duration metric: took 3.801254961s to wait for elevateKubeSystemPrivileges
	I1216 21:04:45.777522   60829 kubeadm.go:394] duration metric: took 5m2.030533321s to StartCluster
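The elevateKubeSystemPrivileges step logged above creates the minikube-rbac clusterrolebinding (cluster-admin for the kube-system default service account) and then polls "kubectl get sa default" until the default service account appears. A rough manual check of the result, assuming the kubeconfig context minikube writes for this profile, would be:

    kubectl --context default-k8s-diff-port-327790 get clusterrolebinding minikube-rbac -o wide
    kubectl --context default-k8s-diff-port-327790 -n kube-system get serviceaccount default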
	I1216 21:04:45.777543   60829 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:45.777644   60829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:04:45.780034   60829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:45.780369   60829 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:04:45.780450   60829 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:04:45.780566   60829 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780579   60829 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780595   60829 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-327790"
	W1216 21:04:45.780606   60829 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:04:45.780599   60829 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780609   60829 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-327790"
	I1216 21:04:45.780627   60829 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-327790"
	I1216 21:04:45.780627   60829 config.go:182] Loaded profile config "default-k8s-diff-port-327790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	W1216 21:04:45.780638   60829 addons.go:243] addon metrics-server should already be in state true
	I1216 21:04:45.780648   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.780675   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.781091   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781091   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781132   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781136   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.781174   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.781137   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.782022   60829 out.go:177] * Verifying Kubernetes components...
	I1216 21:04:45.783549   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:04:45.799326   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42295
	I1216 21:04:45.799443   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
	I1216 21:04:45.799865   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.800491   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.800510   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.800588   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.801082   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.801102   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.801178   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I1216 21:04:45.801202   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.801517   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.801539   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.801707   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.801925   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.801959   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.801974   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.801992   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.802319   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.802817   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.802857   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.805750   60829 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-327790"
	W1216 21:04:45.805775   60829 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:04:45.805806   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.806153   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.806193   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.820545   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46261
	I1216 21:04:45.821062   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.821598   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.821625   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.822086   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.822294   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.823995   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.824775   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I1216 21:04:45.825269   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.825754   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.825778   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.826117   60829 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:04:45.826158   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.826843   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.826892   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.827527   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:04:45.827557   60829 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:04:45.827577   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.829352   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I1216 21:04:45.829769   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.830197   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.830217   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.830543   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.830767   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.831413   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.832010   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.832030   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.832202   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.832424   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.832496   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.832847   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.833056   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.834475   60829 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:04:45.835944   60829 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:45.835965   60829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:04:45.835983   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.839118   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.839533   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.839560   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.839744   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.839947   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.840087   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.840218   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.845365   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
	I1216 21:04:45.845925   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.847042   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.847060   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.847450   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.847669   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.849934   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.850165   60829 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:45.850182   60829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:04:45.850199   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.853083   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.853493   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.853518   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.853679   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.853848   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.854024   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.854177   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.978935   60829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:04:46.010601   60829 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-327790" to be "Ready" ...
	I1216 21:04:46.019674   60829 node_ready.go:49] node "default-k8s-diff-port-327790" has status "Ready":"True"
	I1216 21:04:46.019704   60829 node_ready.go:38] duration metric: took 9.066576ms for node "default-k8s-diff-port-327790" to be "Ready" ...
	I1216 21:04:46.019715   60829 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:46.033957   60829 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
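node_ready and pod_ready here poll the API for the node's Ready condition and then for each system-critical pod. An approximately equivalent manual wait with kubectl (context, node name and the component=etcd label taken from this log) would be:

    kubectl --context default-k8s-diff-port-327790 wait --for=condition=Ready node/default-k8s-diff-port-327790 --timeout=6m
    kubectl --context default-k8s-diff-port-327790 -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m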
	I1216 21:04:46.103779   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:04:46.103812   60829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:04:46.120299   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:46.171131   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:04:46.171171   60829 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:04:46.171280   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:46.244556   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:46.244587   60829 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:04:46.332646   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:47.461793   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.34145582s)
	I1216 21:04:47.461871   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.129193295s)
	I1216 21:04:47.461793   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.290486436s)
	I1216 21:04:47.461899   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461913   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461918   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.461875   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461982   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.461927   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462463   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462469   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462480   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462488   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462494   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462504   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462506   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462511   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462516   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462521   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462529   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462556   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462573   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462581   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462588   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462805   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462816   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462816   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462827   60829 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-327790"
	I1216 21:04:47.462841   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462848   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.463049   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.463067   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.524466   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.524497   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.524822   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.524843   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.524869   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.526679   60829 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
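The three kubectl apply runs above install the storage-provisioner, storageclass and metrics-server manifests; later in this log the metrics-server pod stays Pending/ContainersNotReady, consistent with the fake.domain/registry.k8s.io/echoserver:1.4 image the test substitutes for it. A hand check of the rollout would look roughly like this (the k8s-app=metrics-server selector is assumed from the upstream metrics-server manifests, not shown in the log):

    kubectl --context default-k8s-diff-port-327790 -n kube-system rollout status deployment/metrics-server --timeout=2m
    kubectl --context default-k8s-diff-port-327790 -n kube-system get pods -l k8s-app=metrics-server -o wide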
	I1216 21:04:45.861404   60421 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.911759863s)
	I1216 21:04:45.861483   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:45.889560   60421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:45.922090   60421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:45.945227   60421 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:45.945261   60421 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:45.945306   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:04:45.960594   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:45.960666   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:45.980613   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:04:46.005349   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:46.005431   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:46.021683   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:04:46.032967   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:46.033042   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:46.064718   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:04:46.078736   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:46.078805   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:04:46.092798   60421 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:46.293434   60421 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:04:45.430910   60215 pod_ready.go:82] duration metric: took 4m0.000948437s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:45.430950   60215 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:45.430970   60215 pod_ready.go:39] duration metric: took 4m12.926677248s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:45.431002   60215 kubeadm.go:597] duration metric: took 4m20.847109652s to restartPrimaryControlPlane
	W1216 21:04:45.431059   60215 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:45.431092   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:47.527909   60829 addons.go:510] duration metric: took 1.747463467s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1216 21:04:48.047956   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:51.781856   60933 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 21:04:51.782285   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:04:51.782543   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
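The [kubelet-check] messages above mean the kubelet's local healthz endpoint on 127.0.0.1:10248 is refusing connections, so kubeadm cannot get past the 40s check. Typical on-node triage for this state would be along these lines:

    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet --no-pager -n 100
    curl -sSL http://localhost:10248/healthz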
	I1216 21:04:54.704462   60421 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:04:54.704514   60421 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:54.704600   60421 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:54.704736   60421 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:54.704839   60421 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:04:54.704894   60421 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:54.706650   60421 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:54.706771   60421 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:54.706865   60421 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:54.706999   60421 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:54.707113   60421 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:54.707256   60421 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:54.707344   60421 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:54.707478   60421 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:54.707573   60421 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:54.707683   60421 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:54.707806   60421 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:54.707851   60421 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:54.707902   60421 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:54.707968   60421 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:54.708056   60421 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:04:54.708127   60421 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:54.708225   60421 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:54.708305   60421 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:54.708427   60421 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:54.708526   60421 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:54.710014   60421 out.go:235]   - Booting up control plane ...
	I1216 21:04:54.710113   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:54.710197   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:54.710254   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:54.710361   60421 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:54.710457   60421 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:54.710511   60421 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:54.710670   60421 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:04:54.710792   60421 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:04:54.710852   60421 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.532878ms
	I1216 21:04:54.710912   60421 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:04:54.710982   60421 kubeadm.go:310] [api-check] The API server is healthy after 5.50189872s
	I1216 21:04:54.711125   60421 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:04:54.711281   60421 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:04:54.711362   60421 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:04:54.711618   60421 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-232338 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:04:54.711712   60421 kubeadm.go:310] [bootstrap-token] Using token: knn1cl.i9horbjuutctjfyf
	I1216 21:04:54.714363   60421 out.go:235]   - Configuring RBAC rules ...
	I1216 21:04:54.714488   60421 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:04:54.714560   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:04:54.714674   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:04:54.714820   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:04:54.714914   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:04:54.714981   60421 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:04:54.715083   60421 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:04:54.715159   60421 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:04:54.715228   60421 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:04:54.715237   60421 kubeadm.go:310] 
	I1216 21:04:54.715345   60421 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:04:54.715359   60421 kubeadm.go:310] 
	I1216 21:04:54.715455   60421 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:04:54.715463   60421 kubeadm.go:310] 
	I1216 21:04:54.715510   60421 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:04:54.715596   60421 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:04:54.715669   60421 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:04:54.715679   60421 kubeadm.go:310] 
	I1216 21:04:54.715767   60421 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:04:54.715775   60421 kubeadm.go:310] 
	I1216 21:04:54.715842   60421 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:04:54.715851   60421 kubeadm.go:310] 
	I1216 21:04:54.715908   60421 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:04:54.715969   60421 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:04:54.716026   60421 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:04:54.716032   60421 kubeadm.go:310] 
	I1216 21:04:54.716106   60421 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:04:54.716171   60421 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:04:54.716177   60421 kubeadm.go:310] 
	I1216 21:04:54.716240   60421 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token knn1cl.i9horbjuutctjfyf \
	I1216 21:04:54.716340   60421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:04:54.716374   60421 kubeadm.go:310] 	--control-plane 
	I1216 21:04:54.716384   60421 kubeadm.go:310] 
	I1216 21:04:54.716457   60421 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:04:54.716467   60421 kubeadm.go:310] 
	I1216 21:04:54.716534   60421 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token knn1cl.i9horbjuutctjfyf \
	I1216 21:04:54.716634   60421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:04:54.716644   60421 cni.go:84] Creating CNI manager for ""
	I1216 21:04:54.716651   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:04:54.718260   60421 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:04:50.542207   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:52.542453   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:55.040960   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:56.042145   60829 pod_ready.go:93] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.042175   60829 pod_ready.go:82] duration metric: took 10.008191514s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.042192   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.047996   60829 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.048022   60829 pod_ready.go:82] duration metric: took 5.821217ms for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.048031   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.052582   60829 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.052608   60829 pod_ready.go:82] duration metric: took 4.569092ms for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.052619   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.056805   60829 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.056834   60829 pod_ready.go:82] duration metric: took 4.206726ms for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.056841   60829 pod_ready.go:39] duration metric: took 10.037112061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:56.056855   60829 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:04:56.056904   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:56.076993   60829 api_server.go:72] duration metric: took 10.296578804s to wait for apiserver process to appear ...
	I1216 21:04:56.077023   60829 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:04:56.077045   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 21:04:56.082250   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1216 21:04:56.083348   60829 api_server.go:141] control plane version: v1.32.0
	I1216 21:04:56.083369   60829 api_server.go:131] duration metric: took 6.339438ms to wait for apiserver health ...
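The healthz probe above goes straight to the apiserver on the non-default port 8444 and gets "ok" back. The same checks can be reproduced from outside the VM via kubectl's raw API access against this profile's context:

    kubectl --context default-k8s-diff-port-327790 get --raw=/healthz
    kubectl --context default-k8s-diff-port-327790 version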
	I1216 21:04:56.083377   60829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:04:56.090255   60829 system_pods.go:59] 9 kube-system pods found
	I1216 21:04:56.090290   60829 system_pods.go:61] "coredns-668d6bf9bc-2qcfx" [4ac98efa-96ff-4564-93de-4a61de7a6507] Running
	I1216 21:04:56.090302   60829 system_pods.go:61] "coredns-668d6bf9bc-fb7wx" [f2f2c0e7-893f-45ba-8da9-3b03f5560d89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:04:56.090310   60829 system_pods.go:61] "etcd-default-k8s-diff-port-327790" [5363e160-ef78-4737-89f9-5f4d0f0eab95] Running
	I1216 21:04:56.090318   60829 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-327790" [b53c6be6-e476-4a5a-80c2-96e701736820] Running
	I1216 21:04:56.090324   60829 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-327790" [57d8747a-7258-48c3-9bcd-6fedaa8b7431] Running
	I1216 21:04:56.090329   60829 system_pods.go:61] "kube-proxy-njqp8" [e5f1789d-b343-4c2e-b078-4a15f4b18569] Running
	I1216 21:04:56.090334   60829 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-327790" [e2303bbd-b9d9-4392-867f-6f5f43f74826] Running
	I1216 21:04:56.090342   60829 system_pods.go:61] "metrics-server-f79f97bbb-84xtf" [569c6717-dc12-474f-8156-d2dd9e410a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:04:56.090349   60829 system_pods.go:61] "storage-provisioner" [4e5b12f0-3d96-4dd0-81e7-300b82058d47] Running
	I1216 21:04:56.090360   60829 system_pods.go:74] duration metric: took 6.975795ms to wait for pod list to return data ...
	I1216 21:04:56.090373   60829 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:04:56.093967   60829 default_sa.go:45] found service account: "default"
	I1216 21:04:56.093998   60829 default_sa.go:55] duration metric: took 3.616693ms for default service account to be created ...
	I1216 21:04:56.094010   60829 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:04:56.241532   60829 system_pods.go:86] 9 kube-system pods found
	I1216 21:04:56.241568   60829 system_pods.go:89] "coredns-668d6bf9bc-2qcfx" [4ac98efa-96ff-4564-93de-4a61de7a6507] Running
	I1216 21:04:56.241582   60829 system_pods.go:89] "coredns-668d6bf9bc-fb7wx" [f2f2c0e7-893f-45ba-8da9-3b03f5560d89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:04:56.241589   60829 system_pods.go:89] "etcd-default-k8s-diff-port-327790" [5363e160-ef78-4737-89f9-5f4d0f0eab95] Running
	I1216 21:04:56.241597   60829 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-327790" [b53c6be6-e476-4a5a-80c2-96e701736820] Running
	I1216 21:04:56.241605   60829 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-327790" [57d8747a-7258-48c3-9bcd-6fedaa8b7431] Running
	I1216 21:04:56.241611   60829 system_pods.go:89] "kube-proxy-njqp8" [e5f1789d-b343-4c2e-b078-4a15f4b18569] Running
	I1216 21:04:56.241617   60829 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-327790" [e2303bbd-b9d9-4392-867f-6f5f43f74826] Running
	I1216 21:04:56.241624   60829 system_pods.go:89] "metrics-server-f79f97bbb-84xtf" [569c6717-dc12-474f-8156-d2dd9e410a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:04:56.241629   60829 system_pods.go:89] "storage-provisioner" [4e5b12f0-3d96-4dd0-81e7-300b82058d47] Running
	I1216 21:04:56.241639   60829 system_pods.go:126] duration metric: took 147.621114ms to wait for k8s-apps to be running ...
	I1216 21:04:56.241656   60829 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:04:56.241730   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:56.258891   60829 system_svc.go:56] duration metric: took 17.227056ms WaitForService to wait for kubelet
	I1216 21:04:56.258935   60829 kubeadm.go:582] duration metric: took 10.478521255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:04:56.258962   60829 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:04:56.438641   60829 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:04:56.438667   60829 node_conditions.go:123] node cpu capacity is 2
	I1216 21:04:56.438679   60829 node_conditions.go:105] duration metric: took 179.711624ms to run NodePressure ...
	I1216 21:04:56.438692   60829 start.go:241] waiting for startup goroutines ...
	I1216 21:04:56.438700   60829 start.go:246] waiting for cluster config update ...
	I1216 21:04:56.438714   60829 start.go:255] writing updated cluster config ...
	I1216 21:04:56.438975   60829 ssh_runner.go:195] Run: rm -f paused
	I1216 21:04:56.490195   60829 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:04:56.492395   60829 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-327790" cluster and "default" namespace by default
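With the profile marked Done, kubectl is pointed at the new cluster and the "default" namespace; a quick smoke test against the configured context would look roughly like:

    kubectl config current-context      # expected: default-k8s-diff-port-327790
    kubectl get nodes -o wide
    kubectl -n kube-system get pods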
	I1216 21:04:54.719483   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:04:54.732035   60421 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:04:54.754010   60421 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:04:54.754122   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:54.754177   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-232338 minikube.k8s.io/updated_at=2024_12_16T21_04_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=no-preload-232338 minikube.k8s.io/primary=true
	I1216 21:04:54.773008   60421 ops.go:34] apiserver oom_adj: -16
	I1216 21:04:55.009573   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:55.510039   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:56.009645   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:56.509608   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:57.009714   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:57.509902   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.009901   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.509631   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.632896   60421 kubeadm.go:1113] duration metric: took 3.878846316s to wait for elevateKubeSystemPrivileges
	I1216 21:04:58.632933   60421 kubeadm.go:394] duration metric: took 5m23.15322559s to StartCluster
	I1216 21:04:58.632951   60421 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:58.633031   60421 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:04:58.635409   60421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:58.635720   60421 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:04:58.635835   60421 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:04:58.635944   60421 addons.go:69] Setting storage-provisioner=true in profile "no-preload-232338"
	I1216 21:04:58.635958   60421 config.go:182] Loaded profile config "no-preload-232338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:04:58.635966   60421 addons.go:234] Setting addon storage-provisioner=true in "no-preload-232338"
	I1216 21:04:58.635969   60421 addons.go:69] Setting default-storageclass=true in profile "no-preload-232338"
	W1216 21:04:58.635975   60421 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:04:58.635986   60421 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-232338"
	I1216 21:04:58.636005   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.635997   60421 addons.go:69] Setting metrics-server=true in profile "no-preload-232338"
	I1216 21:04:58.636029   60421 addons.go:234] Setting addon metrics-server=true in "no-preload-232338"
	W1216 21:04:58.636038   60421 addons.go:243] addon metrics-server should already be in state true
	I1216 21:04:58.636069   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.636428   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636460   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.636428   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636513   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636532   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.636549   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.637558   60421 out.go:177] * Verifying Kubernetes components...
	I1216 21:04:58.639254   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:04:58.652770   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
	I1216 21:04:58.652789   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35093
	I1216 21:04:58.653247   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.653368   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.653818   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.653836   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.653944   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.653963   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.654562   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.654565   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.654775   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.655078   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.655117   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.656383   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I1216 21:04:58.656987   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.657520   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.657553   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.657933   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.658517   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.658566   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.658692   60421 addons.go:234] Setting addon default-storageclass=true in "no-preload-232338"
	W1216 21:04:58.658708   60421 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:04:58.658737   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.659001   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.659043   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.672942   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34153
	I1216 21:04:58.673478   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.674034   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.674063   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.674421   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.674594   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I1216 21:04:58.674614   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.674994   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.675686   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.675699   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.676334   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.676480   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.676898   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.676931   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.679230   60421 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:04:58.680032   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33309
	I1216 21:04:58.680609   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.680754   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:04:58.680772   60421 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:04:58.680794   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.681202   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.681221   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.681610   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.681815   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.683608   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.684266   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.684765   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.684793   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.684925   60421 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:04:56.783069   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:04:56.783323   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:04:58.684932   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.685156   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.685321   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.685515   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.686360   60421 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:58.686379   60421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:04:58.686396   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.689909   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.690365   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.690392   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.690698   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.690927   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.691095   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.691305   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.695899   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36017
	I1216 21:04:58.696274   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.696758   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.696777   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.697064   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.697225   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.698530   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.698751   60421 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:58.698766   60421 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:04:58.698784   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.701986   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.702420   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.702473   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.702655   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.702839   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.702979   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.703197   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.866115   60421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:04:58.892287   60421 node_ready.go:35] waiting up to 6m0s for node "no-preload-232338" to be "Ready" ...
	I1216 21:04:58.949580   60421 node_ready.go:49] node "no-preload-232338" has status "Ready":"True"
	I1216 21:04:58.949610   60421 node_ready.go:38] duration metric: took 57.274849ms for node "no-preload-232338" to be "Ready" ...
	I1216 21:04:58.949622   60421 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:58.983955   60421 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:59.036124   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:59.039113   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:04:59.039139   60421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:04:59.087493   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:04:59.087531   60421 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:04:59.094976   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:59.129816   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:59.129840   60421 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:04:59.236390   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:00.157688   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.121522553s)
	I1216 21:05:00.157736   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.157751   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.157764   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.06274536s)
	I1216 21:05:00.157830   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.157851   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158259   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158270   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158282   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158288   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158297   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158314   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.158327   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158319   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158344   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.158352   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158589   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158604   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158624   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158589   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158655   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.182819   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.182844   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.183229   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.183271   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.679810   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.44337328s)
	I1216 21:05:00.679867   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.679880   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.680233   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.680254   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.680266   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.680274   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.680612   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.680632   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.680643   60421 addons.go:475] Verifying addon metrics-server=true in "no-preload-232338"
	I1216 21:05:00.680659   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.682400   60421 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1216 21:05:00.684226   60421 addons.go:510] duration metric: took 2.048395371s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
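For reference, the addon-enable step above stages each manifest onto the node and then applies it with the pinned kubectl binary against /var/lib/minikube/kubeconfig. A minimal illustrative sketch of that apply step follows; the helper name applyManifests is invented for this example, while the binary, kubeconfig, and manifest paths are the ones that appear in the log lines above.

// applyaddons.go - illustrative sketch only, not minikube's actual helper:
// apply a set of addon manifests with a pinned kubectl and kubeconfig.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.32.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}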
	I1216 21:05:00.997599   60421 pod_ready.go:103] pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:01.990706   60421 pod_ready.go:93] pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:01.990733   60421 pod_ready.go:82] duration metric: took 3.006750411s for pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:01.990742   60421 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:03.998055   60421 pod_ready.go:103] pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:05.997310   60421 pod_ready.go:93] pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:05.997334   60421 pod_ready.go:82] duration metric: took 4.006586503s for pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:05.997346   60421 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.002576   60421 pod_ready.go:93] pod "etcd-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.002597   60421 pod_ready.go:82] duration metric: took 5.244238ms for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.002607   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.007407   60421 pod_ready.go:93] pod "kube-apiserver-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.007435   60421 pod_ready.go:82] duration metric: took 4.820838ms for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.007449   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.012239   60421 pod_ready.go:93] pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.012263   60421 pod_ready.go:82] duration metric: took 4.806874ms for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.012273   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m5hq8" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.017087   60421 pod_ready.go:93] pod "kube-proxy-m5hq8" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.017111   60421 pod_ready.go:82] duration metric: took 4.830348ms for pod "kube-proxy-m5hq8" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.017124   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.393947   60421 pod_ready.go:93] pod "kube-scheduler-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.393978   60421 pod_ready.go:82] duration metric: took 376.845934ms for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.393989   60421 pod_ready.go:39] duration metric: took 7.444356073s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
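The "extra waiting" above blocks until pods carrying each of the listed system-critical labels report Ready. A rough external equivalent, assuming a reachable kubectl and the same label selectors (this is a sketch, not minikube's internal poller), is:

// waitpods.go - sketch: block until kube-system pods with each label set
// report the Ready condition, mirroring the waits logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	labels := []string{
		"k8s-app=kube-dns",
		"component=etcd",
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, l := range labels {
		cmd := exec.Command("kubectl", "wait", "--namespace=kube-system",
			"--for=condition=Ready", "pod", "-l", l, "--timeout=6m")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "pods with %q not ready: %v\n", l, err)
			os.Exit(1)
		}
	}
}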
	I1216 21:05:06.394008   60421 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:05:06.394074   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:05:06.410287   60421 api_server.go:72] duration metric: took 7.774519412s to wait for apiserver process to appear ...
	I1216 21:05:06.410327   60421 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:05:06.410363   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:05:06.415344   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 200:
	ok
	I1216 21:05:06.416302   60421 api_server.go:141] control plane version: v1.32.0
	I1216 21:05:06.416324   60421 api_server.go:131] duration metric: took 5.989768ms to wait for apiserver health ...
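The healthz check above is a plain HTTPS GET against the apiserver that is retried until it returns 200 with body "ok". A minimal sketch of such a poll, using the endpoint shown in the log (TLS verification is skipped here purely for illustration; the real check authenticates against the cluster CA):

// healthz.go - sketch: poll the apiserver /healthz endpoint until it answers 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.240:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver healthy")
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(time.Second)
	}
	fmt.Println("apiserver did not become healthy in time")
}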
	I1216 21:05:06.416333   60421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:05:06.598174   60421 system_pods.go:59] 9 kube-system pods found
	I1216 21:05:06.598205   60421 system_pods.go:61] "coredns-668d6bf9bc-4wwvd" [1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e] Running
	I1216 21:05:06.598210   60421 system_pods.go:61] "coredns-668d6bf9bc-c4qfj" [b9bf3125-1e6d-4794-a2e6-2ff7ed5132b1] Running
	I1216 21:05:06.598214   60421 system_pods.go:61] "etcd-no-preload-232338" [5318f756-4c64-46be-b71b-94d53f48f0e9] Running
	I1216 21:05:06.598218   60421 system_pods.go:61] "kube-apiserver-no-preload-232338" [8d8fa68c-80ab-4747-a2ce-eeaff8847c29] Running
	I1216 21:05:06.598222   60421 system_pods.go:61] "kube-controller-manager-no-preload-232338" [8626806c-cd3f-488c-95c3-4b909878c1e4] Running
	I1216 21:05:06.598224   60421 system_pods.go:61] "kube-proxy-m5hq8" [ca0d357a-dda2-4508-a954-5c67eaf5b8ac] Running
	I1216 21:05:06.598229   60421 system_pods.go:61] "kube-scheduler-no-preload-232338" [8944107e-9e5c-474b-a0c1-9461e797a131] Running
	I1216 21:05:06.598236   60421 system_pods.go:61] "metrics-server-f79f97bbb-l7dcr" [fabafb40-1cb8-427b-88a6-37eeb6fd5b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:06.598240   60421 system_pods.go:61] "storage-provisioner" [3b742666-dfd4-4c9b-95a9-25367ec2a718] Running
	I1216 21:05:06.598248   60421 system_pods.go:74] duration metric: took 181.908567ms to wait for pod list to return data ...
	I1216 21:05:06.598255   60421 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:05:06.794774   60421 default_sa.go:45] found service account: "default"
	I1216 21:05:06.794805   60421 default_sa.go:55] duration metric: took 196.542698ms for default service account to be created ...
	I1216 21:05:06.794823   60421 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:05:06.998297   60421 system_pods.go:86] 9 kube-system pods found
	I1216 21:05:06.998332   60421 system_pods.go:89] "coredns-668d6bf9bc-4wwvd" [1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e] Running
	I1216 21:05:06.998341   60421 system_pods.go:89] "coredns-668d6bf9bc-c4qfj" [b9bf3125-1e6d-4794-a2e6-2ff7ed5132b1] Running
	I1216 21:05:06.998348   60421 system_pods.go:89] "etcd-no-preload-232338" [5318f756-4c64-46be-b71b-94d53f48f0e9] Running
	I1216 21:05:06.998354   60421 system_pods.go:89] "kube-apiserver-no-preload-232338" [8d8fa68c-80ab-4747-a2ce-eeaff8847c29] Running
	I1216 21:05:06.998359   60421 system_pods.go:89] "kube-controller-manager-no-preload-232338" [8626806c-cd3f-488c-95c3-4b909878c1e4] Running
	I1216 21:05:06.998364   60421 system_pods.go:89] "kube-proxy-m5hq8" [ca0d357a-dda2-4508-a954-5c67eaf5b8ac] Running
	I1216 21:05:06.998369   60421 system_pods.go:89] "kube-scheduler-no-preload-232338" [8944107e-9e5c-474b-a0c1-9461e797a131] Running
	I1216 21:05:06.998378   60421 system_pods.go:89] "metrics-server-f79f97bbb-l7dcr" [fabafb40-1cb8-427b-88a6-37eeb6fd5b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:06.998385   60421 system_pods.go:89] "storage-provisioner" [3b742666-dfd4-4c9b-95a9-25367ec2a718] Running
	I1216 21:05:06.998397   60421 system_pods.go:126] duration metric: took 203.564807ms to wait for k8s-apps to be running ...
	I1216 21:05:06.998407   60421 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:05:06.998457   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:07.014979   60421 system_svc.go:56] duration metric: took 16.561363ms WaitForService to wait for kubelet
	I1216 21:05:07.015013   60421 kubeadm.go:582] duration metric: took 8.379260538s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:05:07.015029   60421 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:05:07.195470   60421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:05:07.195504   60421 node_conditions.go:123] node cpu capacity is 2
	I1216 21:05:07.195516   60421 node_conditions.go:105] duration metric: took 180.480949ms to run NodePressure ...
	I1216 21:05:07.195530   60421 start.go:241] waiting for startup goroutines ...
	I1216 21:05:07.195541   60421 start.go:246] waiting for cluster config update ...
	I1216 21:05:07.195554   60421 start.go:255] writing updated cluster config ...
	I1216 21:05:07.195857   60421 ssh_runner.go:195] Run: rm -f paused
	I1216 21:05:07.244442   60421 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:05:07.246788   60421 out.go:177] * Done! kubectl is now configured to use "no-preload-232338" cluster and "default" namespace by default
	I1216 21:05:06.784032   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:05:06.784224   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:05:13.066274   60215 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.635155592s)
	I1216 21:05:13.066379   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:13.096145   60215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:05:13.109211   60215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:05:13.125828   60215 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:05:13.125859   60215 kubeadm.go:157] found existing configuration files:
	
	I1216 21:05:13.125914   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:05:13.146982   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:05:13.147053   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:05:13.159382   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:05:13.176492   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:05:13.176572   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:05:13.190933   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:05:13.213230   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:05:13.213301   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:05:13.224631   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:05:13.234914   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:05:13.234975   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
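The stale-config cleanup above greps each /etc/kubernetes/*.conf for the expected control-plane endpoint and removes any file that is missing it (or missing entirely) before re-running kubeadm init. A compact sketch of that logic, using the same endpoint and file list from the log:

// staleconf.go - sketch: keep each kubeconfig-style file only if it still
// points at the expected control-plane endpoint, otherwise remove it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: treat as stale and remove it.
			_ = os.Remove(f)
			fmt.Printf("removed stale %s\n", f)
			continue
		}
		fmt.Printf("kept %s\n", f)
	}
}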
	I1216 21:05:13.245513   60215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:05:13.300399   60215 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:05:13.300491   60215 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:05:13.424114   60215 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:05:13.424252   60215 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:05:13.424372   60215 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:05:13.434507   60215 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:05:13.436710   60215 out.go:235]   - Generating certificates and keys ...
	I1216 21:05:13.436825   60215 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:05:13.436985   60215 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:05:13.437127   60215 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:05:13.437215   60215 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:05:13.437317   60215 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:05:13.437404   60215 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:05:13.437822   60215 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:05:13.438183   60215 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:05:13.438724   60215 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:05:13.439096   60215 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:05:13.439334   60215 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:05:13.439399   60215 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:05:13.528853   60215 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:05:13.700795   60215 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:05:13.890142   60215 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:05:14.166151   60215 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:05:14.310513   60215 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:05:14.311121   60215 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:05:14.317114   60215 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:05:14.319080   60215 out.go:235]   - Booting up control plane ...
	I1216 21:05:14.319218   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:05:14.319332   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:05:14.319518   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:05:14.340394   60215 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:05:14.348443   60215 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:05:14.348533   60215 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:05:14.493244   60215 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:05:14.493456   60215 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:05:14.995210   60215 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.042805ms
	I1216 21:05:14.995325   60215 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:05:20.000911   60215 kubeadm.go:310] [api-check] The API server is healthy after 5.002773967s
	I1216 21:05:20.019851   60215 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:05:20.037375   60215 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:05:20.074003   60215 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:05:20.074237   60215 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-606219 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:05:20.087136   60215 kubeadm.go:310] [bootstrap-token] Using token: wev02f.lvhctqt9pq1agi1c
	I1216 21:05:20.088742   60215 out.go:235]   - Configuring RBAC rules ...
	I1216 21:05:20.088893   60215 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:05:20.094114   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:05:20.101979   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:05:20.105419   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:05:20.112443   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:05:20.116045   60215 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:05:20.406790   60215 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:05:20.844101   60215 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:05:21.414298   60215 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:05:21.414397   60215 kubeadm.go:310] 
	I1216 21:05:21.414488   60215 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:05:21.414504   60215 kubeadm.go:310] 
	I1216 21:05:21.414636   60215 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:05:21.414655   60215 kubeadm.go:310] 
	I1216 21:05:21.414694   60215 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:05:21.414796   60215 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:05:21.414866   60215 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:05:21.414877   60215 kubeadm.go:310] 
	I1216 21:05:21.414978   60215 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:05:21.415004   60215 kubeadm.go:310] 
	I1216 21:05:21.415071   60215 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:05:21.415080   60215 kubeadm.go:310] 
	I1216 21:05:21.415147   60215 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:05:21.415314   60215 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:05:21.415424   60215 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:05:21.415444   60215 kubeadm.go:310] 
	I1216 21:05:21.415568   60215 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:05:21.415674   60215 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:05:21.415690   60215 kubeadm.go:310] 
	I1216 21:05:21.415837   60215 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wev02f.lvhctqt9pq1agi1c \
	I1216 21:05:21.415982   60215 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:05:21.416023   60215 kubeadm.go:310] 	--control-plane 
	I1216 21:05:21.416033   60215 kubeadm.go:310] 
	I1216 21:05:21.416152   60215 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:05:21.416165   60215 kubeadm.go:310] 
	I1216 21:05:21.416295   60215 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wev02f.lvhctqt9pq1agi1c \
	I1216 21:05:21.416452   60215 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:05:21.417157   60215 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:05:21.417251   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:05:21.417265   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:05:21.418899   60215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:05:21.420240   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:05:21.438639   60215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
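The bridge CNI step above writes a conflist to /etc/cni/net.d/1-k8s.conflist. The exact 496-byte file is not shown in the log; the sketch below writes a generic bridge+portmap conflist of that kind, with an assumed pod subnet, purely to illustrate the shape of the config.

// bridgecni.go - sketch: install a generic bridge CNI conflist at the path used above.
package main

import (
	"fmt"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "k8s-pod-network",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}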
	I1216 21:05:21.470443   60215 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:05:21.470525   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:21.470552   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-606219 minikube.k8s.io/updated_at=2024_12_16T21_05_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=embed-certs-606219 minikube.k8s.io/primary=true
	I1216 21:05:21.721162   60215 ops.go:34] apiserver oom_adj: -16
	I1216 21:05:21.721292   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:22.221634   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:22.722431   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:23.221436   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:23.721948   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.222009   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.722203   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.835684   60215 kubeadm.go:1113] duration metric: took 3.36522517s to wait for elevateKubeSystemPrivileges
	I1216 21:05:24.835729   60215 kubeadm.go:394] duration metric: took 5m0.316036708s to StartCluster
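The repeated "kubectl get sa default" runs above are a retry loop: the default service account existing is the signal that kube-system privileges can be elevated. A sketch of that loop, reusing the kubectl and kubeconfig paths from the log (the timeout value here is an assumption):

// waitsa.go - sketch: poll `kubectl get sa default` until the default
// service account exists, as the loop logged above does.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.32.0/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for default service account")
	os.Exit(1)
}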
	I1216 21:05:24.835751   60215 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:05:24.835847   60215 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:05:24.838279   60215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:05:24.838580   60215 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:05:24.838625   60215 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:05:24.838747   60215 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-606219"
	I1216 21:05:24.838768   60215 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-606219"
	W1216 21:05:24.838789   60215 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:05:24.838816   60215 config.go:182] Loaded profile config "embed-certs-606219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:05:24.838825   60215 addons.go:69] Setting default-storageclass=true in profile "embed-certs-606219"
	I1216 21:05:24.838832   60215 addons.go:69] Setting metrics-server=true in profile "embed-certs-606219"
	I1216 21:05:24.838846   60215 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-606219"
	I1216 21:05:24.838822   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.838848   60215 addons.go:234] Setting addon metrics-server=true in "embed-certs-606219"
	W1216 21:05:24.838945   60215 addons.go:243] addon metrics-server should already be in state true
	I1216 21:05:24.838965   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.839285   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839292   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839331   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.839364   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839415   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.839496   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.843833   60215 out.go:177] * Verifying Kubernetes components...
	I1216 21:05:24.845341   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:05:24.857648   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I1216 21:05:24.858457   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.859021   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.859037   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.861356   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36663
	I1216 21:05:24.861406   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I1216 21:05:24.861357   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.861844   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.862150   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.862188   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.862315   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.862334   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.862334   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.862661   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.862876   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.862894   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.863171   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.863200   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.863634   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.863964   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.867371   60215 addons.go:234] Setting addon default-storageclass=true in "embed-certs-606219"
	W1216 21:05:24.867392   60215 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:05:24.867419   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.867758   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.867801   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.884243   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35999
	I1216 21:05:24.884680   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.885282   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.885304   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.885380   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36799
	I1216 21:05:24.885657   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.885730   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.885934   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.886191   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.886202   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.886473   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.886831   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.886853   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.887935   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.890092   60215 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:05:24.891395   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:05:24.891413   60215 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:05:24.891441   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.894367   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I1216 21:05:24.894926   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.895551   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.895570   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.895832   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.896148   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.896382   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.896501   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.896523   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.897136   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.897327   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.897507   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.897673   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:24.898101   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.900061   60215 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:05:24.901390   60215 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:05:24.901412   60215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:05:24.901432   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.904063   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.904403   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.904421   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.904617   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.904828   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.904969   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.905117   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:24.907518   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I1216 21:05:24.907890   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.908349   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.908362   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.908615   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.908793   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.910349   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.910557   60215 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:05:24.910590   60215 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:05:24.910623   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.913163   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.913546   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.913628   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.913971   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.914156   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.914402   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.914562   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:25.054773   60215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:05:25.077692   60215 node_ready.go:35] waiting up to 6m0s for node "embed-certs-606219" to be "Ready" ...
	I1216 21:05:25.085592   60215 node_ready.go:49] node "embed-certs-606219" has status "Ready":"True"
	I1216 21:05:25.085618   60215 node_ready.go:38] duration metric: took 7.893359ms for node "embed-certs-606219" to be "Ready" ...
	I1216 21:05:25.085630   60215 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:05:25.092073   60215 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:25.160890   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:05:25.171950   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:05:25.174517   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:05:25.174540   60215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:05:25.201386   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:05:25.201415   60215 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:05:25.279568   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:25.279599   60215 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:05:25.316528   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:25.944495   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944521   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944529   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944533   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944816   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.944835   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.944845   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944855   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944855   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.944869   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.944876   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944888   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944817   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945069   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945131   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.945147   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.945168   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.945173   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945218   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.961427   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.961449   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.961729   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.961743   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.745600   60215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.429029698s)
	I1216 21:05:26.745665   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:26.745678   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:26.746097   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:26.746115   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:26.746128   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.746142   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:26.746151   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:26.746429   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:26.746446   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.746457   60215 addons.go:475] Verifying addon metrics-server=true in "embed-certs-606219"
	I1216 21:05:26.746480   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:26.748859   60215 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1216 21:05:26.785021   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:05:26.785309   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:05:26.750502   60215 addons.go:510] duration metric: took 1.911885721s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
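	Note: once the addons above are enabled, a hedged way to confirm metrics-server is actually serving (not part of this test run; the context name is taken from this log) is:

		kubectl --context embed-certs-606219 get apiservice v1beta1.metrics.k8s.io
		kubectl --context embed-certs-606219 top nodes

	'kubectl top' only returns data after the metrics-server pod, still Pending further down, becomes Ready.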
	I1216 21:05:27.124629   60215 pod_ready.go:103] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:28.100607   60215 pod_ready.go:93] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:28.100642   60215 pod_ready.go:82] duration metric: took 3.008540123s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.100654   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.107620   60215 pod_ready.go:93] pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:28.107649   60215 pod_ready.go:82] duration metric: took 6.986126ms for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.107661   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:30.114012   60215 pod_ready.go:103] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:31.116704   60215 pod_ready.go:93] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:31.116738   60215 pod_ready.go:82] duration metric: took 3.009069732s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.116752   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.122043   60215 pod_ready.go:93] pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:31.122079   60215 pod_ready.go:82] duration metric: took 5.318248ms for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.122089   60215 pod_ready.go:39] duration metric: took 6.036446164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:05:31.122107   60215 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:05:31.122167   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:05:31.140854   60215 api_server.go:72] duration metric: took 6.302233923s to wait for apiserver process to appear ...
	I1216 21:05:31.140887   60215 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:05:31.140910   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:05:31.146080   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 200:
	ok
	I1216 21:05:31.147076   60215 api_server.go:141] control plane version: v1.32.0
	I1216 21:05:31.147107   60215 api_server.go:131] duration metric: took 6.2056ms to wait for apiserver health ...
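	Note: the healthz probe above hits the apiserver directly on the VM address; a hedged way to reproduce it by hand (address taken from this log, '-k' assumed because the cluster CA is not in the local trust store) is:

		curl -k https://192.168.61.151:8443/healthz

	which should print 'ok', matching the response logged above.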
	I1216 21:05:31.147115   60215 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:05:31.152598   60215 system_pods.go:59] 9 kube-system pods found
	I1216 21:05:31.152627   60215 system_pods.go:61] "coredns-668d6bf9bc-5c74p" [ef8e73b6-150f-47cc-9df9-dcf983e5bd6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.152634   60215 system_pods.go:61] "coredns-668d6bf9bc-xhdlz" [c1b5b585-f005-4885-9809-60f60e03bf04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.152640   60215 system_pods.go:61] "etcd-embed-certs-606219" [f5595ee4-23f3-4227-8e25-8679fd2dc722] Running
	I1216 21:05:31.152643   60215 system_pods.go:61] "kube-apiserver-embed-certs-606219" [be11ba17-ecee-47c1-a4bd-329e0e705369] Running
	I1216 21:05:31.152647   60215 system_pods.go:61] "kube-controller-manager-embed-certs-606219" [21210597-d4d5-4cab-9a24-2d9f702f682d] Running
	I1216 21:05:31.152652   60215 system_pods.go:61] "kube-proxy-677x9" [37810520-4f02-46c4-8eeb-6dc70c859e3e] Running
	I1216 21:05:31.152655   60215 system_pods.go:61] "kube-scheduler-embed-certs-606219" [5a39f42d-b727-4acd-bd39-ae1c56a5b725] Running
	I1216 21:05:31.152659   60215 system_pods.go:61] "metrics-server-f79f97bbb-6fxnl" [828f2925-402c-4f49-89e1-354e082c0de4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:31.152662   60215 system_pods.go:61] "storage-provisioner" [6437bd61-690b-498d-b35c-e2ef4eb5be97] Running
	I1216 21:05:31.152669   60215 system_pods.go:74] duration metric: took 5.548798ms to wait for pod list to return data ...
	I1216 21:05:31.152675   60215 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:05:31.155444   60215 default_sa.go:45] found service account: "default"
	I1216 21:05:31.155469   60215 default_sa.go:55] duration metric: took 2.788897ms for default service account to be created ...
	I1216 21:05:31.155477   60215 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:05:31.160520   60215 system_pods.go:86] 9 kube-system pods found
	I1216 21:05:31.160548   60215 system_pods.go:89] "coredns-668d6bf9bc-5c74p" [ef8e73b6-150f-47cc-9df9-dcf983e5bd6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.160555   60215 system_pods.go:89] "coredns-668d6bf9bc-xhdlz" [c1b5b585-f005-4885-9809-60f60e03bf04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.160561   60215 system_pods.go:89] "etcd-embed-certs-606219" [f5595ee4-23f3-4227-8e25-8679fd2dc722] Running
	I1216 21:05:31.160565   60215 system_pods.go:89] "kube-apiserver-embed-certs-606219" [be11ba17-ecee-47c1-a4bd-329e0e705369] Running
	I1216 21:05:31.160569   60215 system_pods.go:89] "kube-controller-manager-embed-certs-606219" [21210597-d4d5-4cab-9a24-2d9f702f682d] Running
	I1216 21:05:31.160573   60215 system_pods.go:89] "kube-proxy-677x9" [37810520-4f02-46c4-8eeb-6dc70c859e3e] Running
	I1216 21:05:31.160576   60215 system_pods.go:89] "kube-scheduler-embed-certs-606219" [5a39f42d-b727-4acd-bd39-ae1c56a5b725] Running
	I1216 21:05:31.160580   60215 system_pods.go:89] "metrics-server-f79f97bbb-6fxnl" [828f2925-402c-4f49-89e1-354e082c0de4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:31.160584   60215 system_pods.go:89] "storage-provisioner" [6437bd61-690b-498d-b35c-e2ef4eb5be97] Running
	I1216 21:05:31.160591   60215 system_pods.go:126] duration metric: took 5.109359ms to wait for k8s-apps to be running ...
	I1216 21:05:31.160597   60215 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:05:31.160637   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:31.177182   60215 system_svc.go:56] duration metric: took 16.575484ms WaitForService to wait for kubelet
	I1216 21:05:31.177216   60215 kubeadm.go:582] duration metric: took 6.33860089s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:05:31.177239   60215 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:05:31.180614   60215 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:05:31.180635   60215 node_conditions.go:123] node cpu capacity is 2
	I1216 21:05:31.180645   60215 node_conditions.go:105] duration metric: took 3.400617ms to run NodePressure ...
	I1216 21:05:31.180656   60215 start.go:241] waiting for startup goroutines ...
	I1216 21:05:31.180667   60215 start.go:246] waiting for cluster config update ...
	I1216 21:05:31.180684   60215 start.go:255] writing updated cluster config ...
	I1216 21:05:31.180960   60215 ssh_runner.go:195] Run: rm -f paused
	I1216 21:05:31.232404   60215 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:05:31.234366   60215 out.go:177] * Done! kubectl is now configured to use "embed-certs-606219" cluster and "default" namespace by default
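	Note: with the embed-certs profile reported Done, a minimal sanity check against the kubeconfig minikube just wrote (a sketch, not part of the test) would be:

		kubectl --context embed-certs-606219 get nodes
		kubectl --context embed-certs-606219 get pods -n kube-system

	Both should answer immediately, since the apiserver healthz check above already returned 200.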
	I1216 21:06:06.787417   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:06.787673   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:06:06.787700   60933 kubeadm.go:310] 
	I1216 21:06:06.787779   60933 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 21:06:06.787849   60933 kubeadm.go:310] 		timed out waiting for the condition
	I1216 21:06:06.787864   60933 kubeadm.go:310] 
	I1216 21:06:06.787894   60933 kubeadm.go:310] 	This error is likely caused by:
	I1216 21:06:06.787944   60933 kubeadm.go:310] 		- The kubelet is not running
	I1216 21:06:06.788115   60933 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 21:06:06.788131   60933 kubeadm.go:310] 
	I1216 21:06:06.788238   60933 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 21:06:06.788270   60933 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 21:06:06.788328   60933 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 21:06:06.788346   60933 kubeadm.go:310] 
	I1216 21:06:06.788492   60933 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 21:06:06.788568   60933 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 21:06:06.788575   60933 kubeadm.go:310] 
	I1216 21:06:06.788706   60933 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 21:06:06.788914   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 21:06:06.789052   60933 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 21:06:06.789150   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 21:06:06.789160   60933 kubeadm.go:310] 
	I1216 21:06:06.789970   60933 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:06:06.790084   60933 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 21:06:06.790222   60933 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1216 21:06:06.790376   60933 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
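	Note: the repeated 'connection refused' on port 10248 above means the kubelet never came up at all. Minikube's suggestion later in this log points at the cgroup driver; a hedged way to compare the runtime and kubelet sides on the node (file locations are the defaults assumed for this setup) is:

		sudo grep -i cgroup_manager /etc/crio/crio.conf
		sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
		sudo journalctl -xeu kubelet | tail -n 50

	A mismatch between the two values is one common reason the kubelet crash-loops before its healthz endpoint ever answers.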
	
	I1216 21:06:06.790430   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:06:07.272336   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:06:07.288881   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:06:07.303411   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:06:07.303437   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:06:07.303486   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:06:07.314605   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:06:07.314675   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:06:07.326523   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:06:07.336506   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:06:07.336587   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:06:07.347505   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:06:07.357743   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:06:07.357799   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:06:07.368251   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:06:07.378296   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:06:07.378366   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:06:07.390625   60933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:06:07.461800   60933 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 21:06:07.461911   60933 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:06:07.607467   60933 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:06:07.607664   60933 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:06:07.607821   60933 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 21:06:07.821429   60933 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:06:07.823617   60933 out.go:235]   - Generating certificates and keys ...
	I1216 21:06:07.823728   60933 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:06:07.823826   60933 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:06:07.823970   60933 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:06:07.824066   60933 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:06:07.824191   60933 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:06:07.824281   60933 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:06:07.824374   60933 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:06:07.824452   60933 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:06:07.824529   60933 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:06:07.824634   60933 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:06:07.824728   60933 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:06:07.824826   60933 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:06:08.070481   60933 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:06:08.416182   60933 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:06:08.472848   60933 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:06:08.528700   60933 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:06:08.551528   60933 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:06:08.552215   60933 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:06:08.552299   60933 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:06:08.702187   60933 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:06:08.704170   60933 out.go:235]   - Booting up control plane ...
	I1216 21:06:08.704286   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:06:08.721205   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:06:08.722619   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:06:08.724289   60933 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:06:08.726457   60933 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 21:06:48.729045   60933 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 21:06:48.729713   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:48.730028   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:06:53.730648   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:53.730870   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:07:03.731670   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:07:03.731904   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:07:23.733276   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:07:23.733489   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:08:03.734439   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:08:03.734730   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:08:03.734768   60933 kubeadm.go:310] 
	I1216 21:08:03.734831   60933 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 21:08:03.734902   60933 kubeadm.go:310] 		timed out waiting for the condition
	I1216 21:08:03.734917   60933 kubeadm.go:310] 
	I1216 21:08:03.734966   60933 kubeadm.go:310] 	This error is likely caused by:
	I1216 21:08:03.735003   60933 kubeadm.go:310] 		- The kubelet is not running
	I1216 21:08:03.735094   60933 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 21:08:03.735104   60933 kubeadm.go:310] 
	I1216 21:08:03.735260   60933 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 21:08:03.735325   60933 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 21:08:03.735353   60933 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 21:08:03.735359   60933 kubeadm.go:310] 
	I1216 21:08:03.735486   60933 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 21:08:03.735604   60933 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 21:08:03.735614   60933 kubeadm.go:310] 
	I1216 21:08:03.735757   60933 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 21:08:03.735880   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 21:08:03.735986   60933 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 21:08:03.736096   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 21:08:03.736107   60933 kubeadm.go:310] 
	I1216 21:08:03.736944   60933 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:08:03.737145   60933 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 21:08:03.737211   60933 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 21:08:03.737287   60933 kubeadm.go:394] duration metric: took 7m57.891196073s to StartCluster
	I1216 21:08:03.737346   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:08:03.737417   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:08:03.789377   60933 cri.go:89] found id: ""
	I1216 21:08:03.789412   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.789421   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:08:03.789426   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:08:03.789477   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:08:03.831122   60933 cri.go:89] found id: ""
	I1216 21:08:03.831150   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.831161   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:08:03.831167   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:08:03.831236   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:08:03.870598   60933 cri.go:89] found id: ""
	I1216 21:08:03.870625   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.870634   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:08:03.870640   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:08:03.870695   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:08:03.909060   60933 cri.go:89] found id: ""
	I1216 21:08:03.909095   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.909103   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:08:03.909109   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:08:03.909163   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:08:03.946925   60933 cri.go:89] found id: ""
	I1216 21:08:03.946954   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.946962   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:08:03.946968   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:08:03.947038   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:08:03.985596   60933 cri.go:89] found id: ""
	I1216 21:08:03.985629   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.985650   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:08:03.985670   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:08:03.985736   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:08:04.022504   60933 cri.go:89] found id: ""
	I1216 21:08:04.022530   60933 logs.go:282] 0 containers: []
	W1216 21:08:04.022538   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:08:04.022545   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:08:04.022608   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:08:04.075636   60933 cri.go:89] found id: ""
	I1216 21:08:04.075667   60933 logs.go:282] 0 containers: []
	W1216 21:08:04.075677   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:08:04.075688   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:08:04.075707   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:08:04.180622   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:08:04.180653   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:08:04.180671   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:08:04.308091   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:08:04.308146   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:08:04.353240   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:08:04.353294   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:08:04.407919   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:08:04.407955   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 21:08:04.423583   60933 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1216 21:08:04.423644   60933 out.go:270] * 
	W1216 21:08:04.423727   60933 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 21:08:04.423749   60933 out.go:270] * 
	W1216 21:08:04.424576   60933 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 21:08:04.428361   60933 out.go:201] 
	W1216 21:08:04.429839   60933 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 21:08:04.429919   60933 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 21:08:04.429958   60933 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 21:08:04.431619   60933 out.go:201] 
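	Note: the suggestion above ('--extra-config=kubelet.cgroup-driver=systemd') is minikube's own hint for this failure mode. A hedged sketch of retrying this profile with it (profile name and Kubernetes version are taken from this log; any other flags would be assumptions) is:

		minikube start -p old-k8s-version-847766 --kubernetes-version=v1.20.0 \
		  --extra-config=kubelet.cgroup-driver=systemd

	See also the related issue linked above, https://github.com/kubernetes/minikube/issues/4172.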
	
	
	==> CRI-O <==
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.729872831Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383829729845935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9f9a5bf-3e17-40ce-9f6b-d4e3ff592e05 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.730515779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7ab5b57-1390-46b3-9b90-3795b6e1e755 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.730623743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7ab5b57-1390-46b3-9b90-3795b6e1e755 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.730660658Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b7ab5b57-1390-46b3-9b90-3795b6e1e755 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.768186154Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b473719-01fd-478c-8450-c1cc562746c5 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.768262566Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b473719-01fd-478c-8450-c1cc562746c5 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.769832140Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73a96720-34c9-4a30-a7fc-4ff6d22f2700 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.770266631Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383829770218364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73a96720-34c9-4a30-a7fc-4ff6d22f2700 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.771229662Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc09547d-8894-4722-8210-23b3a905e2fc name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.771292216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc09547d-8894-4722-8210-23b3a905e2fc name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.771337828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dc09547d-8894-4722-8210-23b3a905e2fc name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.810170265Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f942e597-d1d3-49f8-9222-c50fbacde4a2 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.810253220Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f942e597-d1d3-49f8-9222-c50fbacde4a2 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.812243743Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b30ad49-20d1-4380-b92c-a1c7c37d0e43 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.812730686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383829812697748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b30ad49-20d1-4380-b92c-a1c7c37d0e43 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.813442829Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a78e764d-a455-439b-ae97-16c185b7c679 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.813521922Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a78e764d-a455-439b-ae97-16c185b7c679 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.813561452Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a78e764d-a455-439b-ae97-16c185b7c679 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.850184829Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef3f5f59-fc98-45b5-acef-32bb595e024f name=/runtime.v1.RuntimeService/Version
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.850276095Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef3f5f59-fc98-45b5-acef-32bb595e024f name=/runtime.v1.RuntimeService/Version
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.851480809Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=927e160c-b58e-4ca2-b8c1-d9bb63630f59 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.851994551Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383829851961458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=927e160c-b58e-4ca2-b8c1-d9bb63630f59 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.852652834Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7112fbca-e78f-4b5e-abfc-46ff24aa0376 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.852712205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7112fbca-e78f-4b5e-abfc-46ff24aa0376 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:17:09 old-k8s-version-847766 crio[626]: time="2024-12-16 21:17:09.852748851Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7112fbca-e78f-4b5e-abfc-46ff24aa0376 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 20:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053004] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042792] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.137612] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.914611] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.669532] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.090745] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.063238] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068057] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.211871] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.132194] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.273053] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[Dec16 21:00] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.063116] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.314286] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[ +12.945441] kauditd_printk_skb: 46 callbacks suppressed
	[Dec16 21:04] systemd-fstab-generator[4991]: Ignoring "noauto" option for root device
	[Dec16 21:06] systemd-fstab-generator[5267]: Ignoring "noauto" option for root device
	[  +0.075796] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:17:10 up 17 min,  0 users,  load average: 0.00, 0.01, 0.03
	Linux old-k8s-version-847766 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6447]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6447]: net/http.(*Transport).dialConnFor(0xc000791a40, 0xc00016dc30)
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6447]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6447]: created by net/http.(*Transport).queueForDial
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6447]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6447]: goroutine 155 [select]:
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6447]: net.(*netFD).connect.func2(0x4f7fe40, 0xc0002eb020, 0xc000b8b980, 0xc0007346c0, 0xc000734660)
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6447]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6447]: created by net.(*netFD).connect
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6447]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6447]: goroutine 154 [runnable]:
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6447]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0001808c0)
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6447]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6447]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6447]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Dec 16 21:17:05 old-k8s-version-847766 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 16 21:17:05 old-k8s-version-847766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 21:17:05 old-k8s-version-847766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Dec 16 21:17:05 old-k8s-version-847766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 16 21:17:05 old-k8s-version-847766 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6456]: I1216 21:17:05.738417    6456 server.go:416] Version: v1.20.0
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6456]: I1216 21:17:05.738747    6456 server.go:837] Client rotation is on, will bootstrap in background
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6456]: I1216 21:17:05.740491    6456 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6456]: W1216 21:17:05.741417    6456 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 16 21:17:05 old-k8s-version-847766 kubelet[6456]: I1216 21:17:05.741638    6456 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-847766 -n old-k8s-version-847766
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-847766 -n old-k8s-version-847766: exit status 2 (260.898677ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-847766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.49s)
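The kubeadm wait-control-plane failure captured in the logs above ends with concrete hints: enable the kubelet unit, inspect its journal, and retry the start with the systemd cgroup driver. As a rough, illustrative sketch only (the journalctl form mirrors the ssh invocations recorded elsewhere in this run, and the kvm2/crio flags are assumptions copied from the other profiles in this report; the only flag the log itself suggests is --extra-config=kubelet.cgroup-driver=systemd):

	out/minikube-linux-amd64 -p old-k8s-version-847766 ssh "sudo journalctl -xeu kubelet --no-pager"
	out/minikube-linux-amd64 start -p old-k8s-version-847766 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd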

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (473.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-327790 -n default-k8s-diff-port-327790
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-16 21:21:52.098382517 +0000 UTC m=+6430.572030239
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-327790 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-327790 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.595µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-327790 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
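For reference, the assertion that just failed can be checked by hand against the same kubeconfig context; a minimal sketch (the get-pods form is an assumption, the describe invocation is the one the harness itself attempted above, and the expected image string comes from the failure message):

	kubectl --context default-k8s-diff-port-327790 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-327790 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard | grep echoserver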
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-327790 -n default-k8s-diff-port-327790
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-327790 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-327790 logs -n 25: (1.610218257s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-647112 sudo systemctl                        | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | status kubelet --all --full                          |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo systemctl                        | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | cat kubelet --no-pager                               |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo journalctl                       | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | -xeu kubelet --all --full                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo cat                              | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo cat                              | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo systemctl                        | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC |                     |
	|         | status docker --all --full                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo systemctl                        | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | cat docker --no-pager                                |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo cat                              | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo docker                           | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo systemctl                        | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC |                     |
	|         | status cri-docker --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo systemctl                        | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | cat cri-docker --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo cat                              | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo cat                              | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo                                  | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo systemctl                        | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo systemctl                        | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo cat                              | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo cat                              | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo containerd                       | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo systemctl                        | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo systemctl                        | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo find                             | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-647112 sudo crio                             | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-647112                                       | auto-647112           | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC | 16 Dec 24 21:21 UTC |
	| start   | -p custom-flannel-647112                             | custom-flannel-647112 | jenkins | v1.34.0 | 16 Dec 24 21:21 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 21:21:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 21:21:41.119961   71506 out.go:345] Setting OutFile to fd 1 ...
	I1216 21:21:41.120105   71506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 21:21:41.120112   71506 out.go:358] Setting ErrFile to fd 2...
	I1216 21:21:41.120118   71506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 21:21:41.120363   71506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 21:21:41.121057   71506 out.go:352] Setting JSON to false
	I1216 21:21:41.122668   71506 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7446,"bootTime":1734376655,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 21:21:41.122870   71506 start.go:139] virtualization: kvm guest
	I1216 21:21:41.125681   71506 out.go:177] * [custom-flannel-647112] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 21:21:41.127533   71506 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 21:21:41.127557   71506 notify.go:220] Checking for updates...
	I1216 21:21:41.130417   71506 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 21:21:41.131966   71506 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:21:41.133442   71506 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 21:21:41.134810   71506 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 21:21:41.136061   71506 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 21:21:41.137988   71506 config.go:182] Loaded profile config "calico-647112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:21:41.138180   71506 config.go:182] Loaded profile config "default-k8s-diff-port-327790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:21:41.138351   71506 config.go:182] Loaded profile config "kindnet-647112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:21:41.138493   71506 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 21:21:41.194718   71506 out.go:177] * Using the kvm2 driver based on user configuration
	I1216 21:21:41.196198   71506 start.go:297] selected driver: kvm2
	I1216 21:21:41.196223   71506 start.go:901] validating driver "kvm2" against <nil>
	I1216 21:21:41.196237   71506 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 21:21:41.197276   71506 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 21:21:41.197381   71506 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 21:21:41.216832   71506 install.go:137] /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1216 21:21:41.216898   71506 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 21:21:41.217232   71506 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:21:41.217276   71506 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1216 21:21:41.217290   71506 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1216 21:21:41.217382   71506 start.go:340] cluster config:
	{Name:custom-flannel-647112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-647112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:21:41.217529   71506 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 21:21:41.220789   71506 out.go:177] * Starting "custom-flannel-647112" primary control-plane node in "custom-flannel-647112" cluster
	I1216 21:21:41.222138   71506 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 21:21:41.222205   71506 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1216 21:21:41.222220   71506 cache.go:56] Caching tarball of preloaded images
	I1216 21:21:41.222325   71506 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 21:21:41.222350   71506 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1216 21:21:41.222491   71506 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/custom-flannel-647112/config.json ...
	I1216 21:21:41.222528   71506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/custom-flannel-647112/config.json: {Name:mk7cf07b88ee82b06053a8b8022584ed11a7f9c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:21:41.222740   71506 start.go:360] acquireMachinesLock for custom-flannel-647112: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 21:21:41.222798   71506 start.go:364] duration metric: took 29.348µs to acquireMachinesLock for "custom-flannel-647112"
	I1216 21:21:41.222820   71506 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-647112 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.32.0 ClusterName:custom-flannel-647112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:21:41.222964   71506 start.go:125] createHost starting for "" (driver="kvm2")
	I1216 21:21:39.735231   69424 main.go:141] libmachine: (calico-647112) Calling .GetIP
	I1216 21:21:39.738046   69424 main.go:141] libmachine: (calico-647112) DBG | domain calico-647112 has defined MAC address 52:54:00:d9:e4:f9 in network mk-calico-647112
	I1216 21:21:39.738528   69424 main.go:141] libmachine: (calico-647112) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d9:e4:f9", ip: ""} in network mk-calico-647112: {Iface:virbr4 ExpiryTime:2024-12-16 22:21:26 +0000 UTC Type:0 Mac:52:54:00:d9:e4:f9 Iaid: IPaddr:192.168.72.190 Prefix:24 Hostname:calico-647112 Clientid:01:52:54:00:d9:e4:f9}
	I1216 21:21:39.738558   69424 main.go:141] libmachine: (calico-647112) DBG | domain calico-647112 has defined IP address 192.168.72.190 and MAC address 52:54:00:d9:e4:f9 in network mk-calico-647112
	I1216 21:21:39.738956   69424 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1216 21:21:39.744488   69424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:21:39.760737   69424 kubeadm.go:883] updating cluster {Name:calico-647112 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.
0 ClusterName:calico-647112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.72.190 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 21:21:39.760876   69424 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 21:21:39.760950   69424 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:21:39.804563   69424 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 21:21:39.804641   69424 ssh_runner.go:195] Run: which lz4
	I1216 21:21:39.810005   69424 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 21:21:39.815934   69424 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 21:21:39.815968   69424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1216 21:21:41.498673   69424 crio.go:462] duration metric: took 1.688685634s to copy over tarball
	I1216 21:21:41.498769   69424 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 21:21:42.027314   69158 node_ready.go:53] node "kindnet-647112" has status "Ready":"False"
	I1216 21:21:44.027437   69158 node_ready.go:53] node "kindnet-647112" has status "Ready":"False"
	I1216 21:21:44.527519   69158 node_ready.go:49] node "kindnet-647112" has status "Ready":"True"
	I1216 21:21:44.527550   69158 node_ready.go:38] duration metric: took 12.004708686s for node "kindnet-647112" to be "Ready" ...
	I1216 21:21:44.527563   69158 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:21:44.535808   69158 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-swcxw" in "kube-system" namespace to be "Ready" ...
	I1216 21:21:41.224752   71506 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 21:21:41.224956   71506 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:21:41.225009   71506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:21:41.244005   71506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43511
	I1216 21:21:41.244593   71506 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:21:41.245289   71506 main.go:141] libmachine: Using API Version  1
	I1216 21:21:41.245318   71506 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:21:41.245770   71506 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:21:41.245981   71506 main.go:141] libmachine: (custom-flannel-647112) Calling .GetMachineName
	I1216 21:21:41.246167   71506 main.go:141] libmachine: (custom-flannel-647112) Calling .DriverName
	I1216 21:21:41.246346   71506 start.go:159] libmachine.API.Create for "custom-flannel-647112" (driver="kvm2")
	I1216 21:21:41.246385   71506 client.go:168] LocalClient.Create starting
	I1216 21:21:41.246427   71506 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem
	I1216 21:21:41.246484   71506 main.go:141] libmachine: Decoding PEM data...
	I1216 21:21:41.246508   71506 main.go:141] libmachine: Parsing certificate...
	I1216 21:21:41.246572   71506 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem
	I1216 21:21:41.246599   71506 main.go:141] libmachine: Decoding PEM data...
	I1216 21:21:41.246615   71506 main.go:141] libmachine: Parsing certificate...
	I1216 21:21:41.246649   71506 main.go:141] libmachine: Running pre-create checks...
	I1216 21:21:41.246679   71506 main.go:141] libmachine: (custom-flannel-647112) Calling .PreCreateCheck
	I1216 21:21:41.247096   71506 main.go:141] libmachine: (custom-flannel-647112) Calling .GetConfigRaw
	I1216 21:21:41.247586   71506 main.go:141] libmachine: Creating machine...
	I1216 21:21:41.247604   71506 main.go:141] libmachine: (custom-flannel-647112) Calling .Create
	I1216 21:21:41.247773   71506 main.go:141] libmachine: (custom-flannel-647112) Creating KVM machine...
	I1216 21:21:41.249599   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | found existing default KVM network
	I1216 21:21:41.251311   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | I1216 21:21:41.251106   71530 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:59:ca:f3} reservation:<nil>}
	I1216 21:21:41.252949   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | I1216 21:21:41.252837   71530 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010ffb0}
	I1216 21:21:41.252983   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | created network xml: 
	I1216 21:21:41.253001   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | <network>
	I1216 21:21:41.253014   71506 main.go:141] libmachine: (custom-flannel-647112) DBG |   <name>mk-custom-flannel-647112</name>
	I1216 21:21:41.253036   71506 main.go:141] libmachine: (custom-flannel-647112) DBG |   <dns enable='no'/>
	I1216 21:21:41.253048   71506 main.go:141] libmachine: (custom-flannel-647112) DBG |   
	I1216 21:21:41.253060   71506 main.go:141] libmachine: (custom-flannel-647112) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1216 21:21:41.253078   71506 main.go:141] libmachine: (custom-flannel-647112) DBG |     <dhcp>
	I1216 21:21:41.253091   71506 main.go:141] libmachine: (custom-flannel-647112) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1216 21:21:41.253101   71506 main.go:141] libmachine: (custom-flannel-647112) DBG |     </dhcp>
	I1216 21:21:41.253105   71506 main.go:141] libmachine: (custom-flannel-647112) DBG |   </ip>
	I1216 21:21:41.253110   71506 main.go:141] libmachine: (custom-flannel-647112) DBG |   
	I1216 21:21:41.253117   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | </network>
	I1216 21:21:41.253123   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | 
	I1216 21:21:41.259988   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | trying to create private KVM network mk-custom-flannel-647112 192.168.50.0/24...
	I1216 21:21:41.359362   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | private KVM network mk-custom-flannel-647112 192.168.50.0/24 created
	I1216 21:21:41.359393   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | I1216 21:21:41.359225   71530 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 21:21:41.359408   71506 main.go:141] libmachine: (custom-flannel-647112) Setting up store path in /home/jenkins/minikube-integration/20091-7083/.minikube/machines/custom-flannel-647112 ...
	I1216 21:21:41.359428   71506 main.go:141] libmachine: (custom-flannel-647112) Building disk image from file:///home/jenkins/minikube-integration/20091-7083/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso
	I1216 21:21:41.359445   71506 main.go:141] libmachine: (custom-flannel-647112) Downloading /home/jenkins/minikube-integration/20091-7083/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20091-7083/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
	I1216 21:21:41.678442   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | I1216 21:21:41.678302   71530 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/custom-flannel-647112/id_rsa...
	I1216 21:21:41.849843   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | I1216 21:21:41.849701   71530 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/custom-flannel-647112/custom-flannel-647112.rawdisk...
	I1216 21:21:41.849884   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | Writing magic tar header
	I1216 21:21:41.849896   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | Writing SSH key tar header
	I1216 21:21:41.849907   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | I1216 21:21:41.849839   71530 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/20091-7083/.minikube/machines/custom-flannel-647112 ...
	I1216 21:21:41.850063   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/custom-flannel-647112
	I1216 21:21:41.850100   71506 main.go:141] libmachine: (custom-flannel-647112) Setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube/machines/custom-flannel-647112 (perms=drwx------)
	I1216 21:21:41.850113   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube/machines
	I1216 21:21:41.850131   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 21:21:41.850141   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20091-7083
	I1216 21:21:41.850158   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1216 21:21:41.850170   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | Checking permissions on dir: /home/jenkins
	I1216 21:21:41.850186   71506 main.go:141] libmachine: (custom-flannel-647112) Setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube/machines (perms=drwxr-xr-x)
	I1216 21:21:41.850207   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | Checking permissions on dir: /home
	I1216 21:21:41.850222   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | Skipping /home - not owner
	I1216 21:21:41.850238   71506 main.go:141] libmachine: (custom-flannel-647112) Setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube (perms=drwxr-xr-x)
	I1216 21:21:41.850251   71506 main.go:141] libmachine: (custom-flannel-647112) Setting executable bit set on /home/jenkins/minikube-integration/20091-7083 (perms=drwxrwxr-x)
	I1216 21:21:41.850266   71506 main.go:141] libmachine: (custom-flannel-647112) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1216 21:21:41.850284   71506 main.go:141] libmachine: (custom-flannel-647112) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1216 21:21:41.850297   71506 main.go:141] libmachine: (custom-flannel-647112) Creating domain...
	I1216 21:21:41.851706   71506 main.go:141] libmachine: (custom-flannel-647112) define libvirt domain using xml: 
	I1216 21:21:41.851734   71506 main.go:141] libmachine: (custom-flannel-647112) <domain type='kvm'>
	I1216 21:21:41.851744   71506 main.go:141] libmachine: (custom-flannel-647112)   <name>custom-flannel-647112</name>
	I1216 21:21:41.851751   71506 main.go:141] libmachine: (custom-flannel-647112)   <memory unit='MiB'>3072</memory>
	I1216 21:21:41.851759   71506 main.go:141] libmachine: (custom-flannel-647112)   <vcpu>2</vcpu>
	I1216 21:21:41.851765   71506 main.go:141] libmachine: (custom-flannel-647112)   <features>
	I1216 21:21:41.851773   71506 main.go:141] libmachine: (custom-flannel-647112)     <acpi/>
	I1216 21:21:41.851791   71506 main.go:141] libmachine: (custom-flannel-647112)     <apic/>
	I1216 21:21:41.851837   71506 main.go:141] libmachine: (custom-flannel-647112)     <pae/>
	I1216 21:21:41.851856   71506 main.go:141] libmachine: (custom-flannel-647112)     
	I1216 21:21:41.851866   71506 main.go:141] libmachine: (custom-flannel-647112)   </features>
	I1216 21:21:41.851873   71506 main.go:141] libmachine: (custom-flannel-647112)   <cpu mode='host-passthrough'>
	I1216 21:21:41.851880   71506 main.go:141] libmachine: (custom-flannel-647112)   
	I1216 21:21:41.851885   71506 main.go:141] libmachine: (custom-flannel-647112)   </cpu>
	I1216 21:21:41.851893   71506 main.go:141] libmachine: (custom-flannel-647112)   <os>
	I1216 21:21:41.851904   71506 main.go:141] libmachine: (custom-flannel-647112)     <type>hvm</type>
	I1216 21:21:41.851916   71506 main.go:141] libmachine: (custom-flannel-647112)     <boot dev='cdrom'/>
	I1216 21:21:41.851923   71506 main.go:141] libmachine: (custom-flannel-647112)     <boot dev='hd'/>
	I1216 21:21:41.851935   71506 main.go:141] libmachine: (custom-flannel-647112)     <bootmenu enable='no'/>
	I1216 21:21:41.851942   71506 main.go:141] libmachine: (custom-flannel-647112)   </os>
	I1216 21:21:41.851951   71506 main.go:141] libmachine: (custom-flannel-647112)   <devices>
	I1216 21:21:41.851961   71506 main.go:141] libmachine: (custom-flannel-647112)     <disk type='file' device='cdrom'>
	I1216 21:21:41.851980   71506 main.go:141] libmachine: (custom-flannel-647112)       <source file='/home/jenkins/minikube-integration/20091-7083/.minikube/machines/custom-flannel-647112/boot2docker.iso'/>
	I1216 21:21:41.851995   71506 main.go:141] libmachine: (custom-flannel-647112)       <target dev='hdc' bus='scsi'/>
	I1216 21:21:41.852004   71506 main.go:141] libmachine: (custom-flannel-647112)       <readonly/>
	I1216 21:21:41.852020   71506 main.go:141] libmachine: (custom-flannel-647112)     </disk>
	I1216 21:21:41.852033   71506 main.go:141] libmachine: (custom-flannel-647112)     <disk type='file' device='disk'>
	I1216 21:21:41.852042   71506 main.go:141] libmachine: (custom-flannel-647112)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1216 21:21:41.852071   71506 main.go:141] libmachine: (custom-flannel-647112)       <source file='/home/jenkins/minikube-integration/20091-7083/.minikube/machines/custom-flannel-647112/custom-flannel-647112.rawdisk'/>
	I1216 21:21:41.852088   71506 main.go:141] libmachine: (custom-flannel-647112)       <target dev='hda' bus='virtio'/>
	I1216 21:21:41.852100   71506 main.go:141] libmachine: (custom-flannel-647112)     </disk>
	I1216 21:21:41.852120   71506 main.go:141] libmachine: (custom-flannel-647112)     <interface type='network'>
	I1216 21:21:41.852134   71506 main.go:141] libmachine: (custom-flannel-647112)       <source network='mk-custom-flannel-647112'/>
	I1216 21:21:41.852141   71506 main.go:141] libmachine: (custom-flannel-647112)       <model type='virtio'/>
	I1216 21:21:41.852152   71506 main.go:141] libmachine: (custom-flannel-647112)     </interface>
	I1216 21:21:41.852164   71506 main.go:141] libmachine: (custom-flannel-647112)     <interface type='network'>
	I1216 21:21:41.852177   71506 main.go:141] libmachine: (custom-flannel-647112)       <source network='default'/>
	I1216 21:21:41.852185   71506 main.go:141] libmachine: (custom-flannel-647112)       <model type='virtio'/>
	I1216 21:21:41.852195   71506 main.go:141] libmachine: (custom-flannel-647112)     </interface>
	I1216 21:21:41.852202   71506 main.go:141] libmachine: (custom-flannel-647112)     <serial type='pty'>
	I1216 21:21:41.852214   71506 main.go:141] libmachine: (custom-flannel-647112)       <target port='0'/>
	I1216 21:21:41.852221   71506 main.go:141] libmachine: (custom-flannel-647112)     </serial>
	I1216 21:21:41.852233   71506 main.go:141] libmachine: (custom-flannel-647112)     <console type='pty'>
	I1216 21:21:41.852251   71506 main.go:141] libmachine: (custom-flannel-647112)       <target type='serial' port='0'/>
	I1216 21:21:41.852262   71506 main.go:141] libmachine: (custom-flannel-647112)     </console>
	I1216 21:21:41.852269   71506 main.go:141] libmachine: (custom-flannel-647112)     <rng model='virtio'>
	I1216 21:21:41.852279   71506 main.go:141] libmachine: (custom-flannel-647112)       <backend model='random'>/dev/random</backend>
	I1216 21:21:41.852286   71506 main.go:141] libmachine: (custom-flannel-647112)     </rng>
	I1216 21:21:41.852296   71506 main.go:141] libmachine: (custom-flannel-647112)     
	I1216 21:21:41.852303   71506 main.go:141] libmachine: (custom-flannel-647112)     
	I1216 21:21:41.852311   71506 main.go:141] libmachine: (custom-flannel-647112)   </devices>
	I1216 21:21:41.852316   71506 main.go:141] libmachine: (custom-flannel-647112) </domain>
	I1216 21:21:41.852364   71506 main.go:141] libmachine: (custom-flannel-647112) 
	I1216 21:21:41.917526   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | domain custom-flannel-647112 has defined MAC address 52:54:00:c6:bd:18 in network default
	I1216 21:21:41.918403   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | domain custom-flannel-647112 has defined MAC address 52:54:00:1c:4d:2a in network mk-custom-flannel-647112
	I1216 21:21:41.918430   71506 main.go:141] libmachine: (custom-flannel-647112) Ensuring networks are active...
	I1216 21:21:41.919375   71506 main.go:141] libmachine: (custom-flannel-647112) Ensuring network default is active
	I1216 21:21:41.919861   71506 main.go:141] libmachine: (custom-flannel-647112) Ensuring network mk-custom-flannel-647112 is active
	I1216 21:21:41.920734   71506 main.go:141] libmachine: (custom-flannel-647112) Getting domain xml...
	I1216 21:21:41.921791   71506 main.go:141] libmachine: (custom-flannel-647112) Creating domain...
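
For context on this step: the domain XML dumped above is handed to libvirt, which defines the guest and boots it before the driver starts polling for an IP. The Go sketch below is a rough, hypothetical illustration of that define-and-start call, assuming the github.com/libvirt/libvirt-go bindings; it is not the minikube KVM driver's actual code, and domainXML stands in for the XML printed above.

	package main
	
	import (
		"log"
	
		libvirt "github.com/libvirt/libvirt-go"
	)
	
	// defineAndStart registers the domain definition with libvirt and boots the
	// guest, roughly matching the "Creating domain..." step in the log above.
	func defineAndStart(domainXML string) error {
		// qemu:///system matches the KVMQemuURI shown later in the config dump.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			return err
		}
		defer conn.Close()
	
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return err
		}
		defer dom.Free()
	
		// Start the guest; afterwards the driver waits for it to obtain an IP.
		return dom.Create()
	}
	
	func main() {
		// domainXML would be the <domain>...</domain> document printed above.
		if err := defineAndStart("<domain>...</domain>"); err != nil {
			log.Fatal(err)
		}
	}
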
	I1216 21:21:43.565253   71506 main.go:141] libmachine: (custom-flannel-647112) Waiting to get IP...
	I1216 21:21:43.566095   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | domain custom-flannel-647112 has defined MAC address 52:54:00:1c:4d:2a in network mk-custom-flannel-647112
	I1216 21:21:43.566615   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | unable to find current IP address of domain custom-flannel-647112 in network mk-custom-flannel-647112
	I1216 21:21:43.566643   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | I1216 21:21:43.566587   71530 retry.go:31] will retry after 217.896201ms: waiting for machine to come up
	I1216 21:21:43.786415   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | domain custom-flannel-647112 has defined MAC address 52:54:00:1c:4d:2a in network mk-custom-flannel-647112
	I1216 21:21:43.786946   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | unable to find current IP address of domain custom-flannel-647112 in network mk-custom-flannel-647112
	I1216 21:21:43.786975   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | I1216 21:21:43.786905   71530 retry.go:31] will retry after 283.881124ms: waiting for machine to come up
	I1216 21:21:44.072718   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | domain custom-flannel-647112 has defined MAC address 52:54:00:1c:4d:2a in network mk-custom-flannel-647112
	I1216 21:21:44.073365   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | unable to find current IP address of domain custom-flannel-647112 in network mk-custom-flannel-647112
	I1216 21:21:44.073389   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | I1216 21:21:44.073325   71530 retry.go:31] will retry after 449.362154ms: waiting for machine to come up
	I1216 21:21:44.523954   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | domain custom-flannel-647112 has defined MAC address 52:54:00:1c:4d:2a in network mk-custom-flannel-647112
	I1216 21:21:44.524623   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | unable to find current IP address of domain custom-flannel-647112 in network mk-custom-flannel-647112
	I1216 21:21:44.524646   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | I1216 21:21:44.524557   71530 retry.go:31] will retry after 498.296212ms: waiting for machine to come up
	I1216 21:21:45.024333   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | domain custom-flannel-647112 has defined MAC address 52:54:00:1c:4d:2a in network mk-custom-flannel-647112
	I1216 21:21:45.024774   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | unable to find current IP address of domain custom-flannel-647112 in network mk-custom-flannel-647112
	I1216 21:21:45.024821   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | I1216 21:21:45.024760   71530 retry.go:31] will retry after 506.835153ms: waiting for machine to come up
	I1216 21:21:45.533637   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | domain custom-flannel-647112 has defined MAC address 52:54:00:1c:4d:2a in network mk-custom-flannel-647112
	I1216 21:21:45.534234   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | unable to find current IP address of domain custom-flannel-647112 in network mk-custom-flannel-647112
	I1216 21:21:45.534336   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | I1216 21:21:45.534188   71530 retry.go:31] will retry after 931.636353ms: waiting for machine to come up
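
The repeated "will retry after ..." messages come from a backoff loop that keeps asking for the guest's IP until the machine reports one. The following sketch is a rough illustration of that pattern only; lookupIP is a hypothetical placeholder for the libvirt DHCP-lease lookup, and the delays are invented rather than copied from retry.go.

	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// lookupIP is a hypothetical helper standing in for the libvirt DHCP lease
	// lookup; it returns an error until the guest has obtained an address.
	func lookupIP(domain string) (string, error) {
		return "", errors.New("unable to find current IP address of domain " + domain)
	}
	
	// waitForIP retries lookupIP with a growing, slightly randomized delay,
	// mirroring the "will retry after Xms" messages in the log above.
	func waitForIP(domain string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			ip, err := lookupIP(domain)
			if err == nil {
				return ip, nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay += delay / 2 // grow the backoff on each attempt
		}
		return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
	}
	
	func main() {
		if _, err := waitForIP("custom-flannel-647112", 3*time.Second); err != nil {
			fmt.Println(err)
		}
	}
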
	I1216 21:21:44.200868   69424 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.702064188s)
	I1216 21:21:44.200899   69424 crio.go:469] duration metric: took 2.702187657s to extract the tarball
	I1216 21:21:44.200906   69424 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 21:21:44.242467   69424 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:21:44.296386   69424 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 21:21:44.296420   69424 cache_images.go:84] Images are preloaded, skipping loading
	I1216 21:21:44.296435   69424 kubeadm.go:934] updating node { 192.168.72.190 8443 v1.32.0 crio true true} ...
	I1216 21:21:44.296556   69424 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-647112 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:calico-647112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1216 21:21:44.296627   69424 ssh_runner.go:195] Run: crio config
	I1216 21:21:44.349808   69424 cni.go:84] Creating CNI manager for "calico"
	I1216 21:21:44.349834   69424 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 21:21:44.349874   69424 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.190 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-647112 NodeName:calico-647112 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.190"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.190 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 21:21:44.350000   69424 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.190
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-647112"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.190"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.190"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 21:21:44.350073   69424 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 21:21:44.363731   69424 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 21:21:44.363812   69424 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 21:21:44.376680   69424 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1216 21:21:44.399968   69424 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 21:21:44.422968   69424 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1216 21:21:44.445414   69424 ssh_runner.go:195] Run: grep 192.168.72.190	control-plane.minikube.internal$ /etc/hosts
	I1216 21:21:44.450122   69424 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.190	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
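
The shell one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the node's current IP. A rough Go equivalent of that filter-and-append step is sketched below for illustration only; the real work happens over SSH as the pipeline shown in the log.

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// ensureHostsEntry rewrites hostsPath so that exactly one line maps
	// control-plane.minikube.internal to ip, keeping every other entry.
	func ensureHostsEntry(hostsPath, ip string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// Drop any previous mapping for the control-plane alias.
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\tcontrol-plane.minikube.internal", ip))
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}
	
	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.72.190"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
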
	I1216 21:21:44.465215   69424 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:21:44.612542   69424 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:21:44.631933   69424 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112 for IP: 192.168.72.190
	I1216 21:21:44.631964   69424 certs.go:194] generating shared ca certs ...
	I1216 21:21:44.631988   69424 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:21:44.632200   69424 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 21:21:44.632264   69424 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 21:21:44.632280   69424 certs.go:256] generating profile certs ...
	I1216 21:21:44.632356   69424 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/client.key
	I1216 21:21:44.632385   69424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/client.crt with IP's: []
	I1216 21:21:44.898117   69424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/client.crt ...
	I1216 21:21:44.898145   69424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/client.crt: {Name:mk26ea353a163da7356b18849b201e43b43d1862 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:21:44.907915   69424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/client.key ...
	I1216 21:21:44.907957   69424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/client.key: {Name:mk5d9f03a99d88c81cc43a3707d0f8a1d0ebbfc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:21:44.908132   69424 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/apiserver.key.556874f4
	I1216 21:21:44.908154   69424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/apiserver.crt.556874f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.190]
	I1216 21:21:45.101938   69424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/apiserver.crt.556874f4 ...
	I1216 21:21:45.101964   69424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/apiserver.crt.556874f4: {Name:mk148a846ed074b06b34096f72a0e8a8596fe064 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:21:45.102145   69424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/apiserver.key.556874f4 ...
	I1216 21:21:45.102165   69424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/apiserver.key.556874f4: {Name:mk7ad5125d3b01f12889368dea8f87119e8fc116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:21:45.102273   69424 certs.go:381] copying /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/apiserver.crt.556874f4 -> /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/apiserver.crt
	I1216 21:21:45.102381   69424 certs.go:385] copying /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/apiserver.key.556874f4 -> /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/apiserver.key
	I1216 21:21:45.102456   69424 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/proxy-client.key
	I1216 21:21:45.102482   69424 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/proxy-client.crt with IP's: []
	I1216 21:21:45.258090   69424 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/proxy-client.crt ...
	I1216 21:21:45.258120   69424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/proxy-client.crt: {Name:mk912ca5ac6a4c9ca72ebd5ff55d61af936041be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:21:45.258311   69424 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/proxy-client.key ...
	I1216 21:21:45.258326   69424 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/proxy-client.key: {Name:mk8242062066487d4fab0d042a46da2e7020afad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:21:45.258546   69424 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 21:21:45.258585   69424 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 21:21:45.258600   69424 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 21:21:45.258636   69424 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 21:21:45.258667   69424 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 21:21:45.258700   69424 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 21:21:45.258757   69424 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:21:45.259398   69424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 21:21:45.293251   69424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 21:21:45.326491   69424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 21:21:45.383732   69424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 21:21:45.412938   69424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 21:21:45.518105   69424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 21:21:45.552099   69424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 21:21:45.650968   69424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/calico-647112/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 21:21:45.681633   69424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 21:21:45.729322   69424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 21:21:45.768892   69424 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 21:21:45.803920   69424 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 21:21:45.829328   69424 ssh_runner.go:195] Run: openssl version
	I1216 21:21:45.836161   69424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 21:21:45.849258   69424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 21:21:45.854606   69424 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 21:21:45.854661   69424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 21:21:45.861865   69424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 21:21:45.875585   69424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 21:21:45.890724   69424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:21:45.899252   69424 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:21:45.899347   69424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:21:45.905939   69424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 21:21:45.922491   69424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 21:21:45.936708   69424 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 21:21:45.942766   69424 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 21:21:45.942839   69424 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 21:21:45.949685   69424 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
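
Each CA certificate copied to /usr/share/ca-certificates is then linked under /etc/ssl/certs as <subject-hash>.0, the layout OpenSSL uses to locate trust anchors; that is what the openssl x509 -hash and ln -fs commands above implement. The sketch below illustrates the same hash-and-link step by shelling out to openssl. It is a simplified illustration, not the certs.go code itself.

	package main
	
	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkCACert computes the OpenSSL subject hash of certPath and creates the
	// /etc/ssl/certs/<hash>.0 symlink that the TLS stack looks up at runtime.
	func linkCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Replace any stale link so the hash always points at this certificate.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			log.Fatal(err)
		}
	}
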
	I1216 21:21:45.963460   69424 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 21:21:45.968197   69424 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 21:21:45.968263   69424 kubeadm.go:392] StartCluster: {Name:calico-647112 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:calico-647112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.72.190 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:21:45.968365   69424 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 21:21:45.968431   69424 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:21:46.021387   69424 cri.go:89] found id: ""
	I1216 21:21:46.021462   69424 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 21:21:46.039150   69424 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:21:46.068485   69424 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:21:46.080982   69424 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:21:46.081012   69424 kubeadm.go:157] found existing configuration files:
	
	I1216 21:21:46.081077   69424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:21:46.091972   69424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:21:46.092071   69424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:21:46.102868   69424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:21:46.112878   69424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:21:46.112950   69424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:21:46.125616   69424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:21:46.137718   69424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:21:46.137797   69424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:21:46.151435   69424 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:21:46.162088   69424 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:21:46.162166   69424 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:21:46.174058   69424 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:21:46.351895   69424 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:21:46.828638   69158 pod_ready.go:103] pod "coredns-668d6bf9bc-swcxw" in "kube-system" namespace has status "Ready":"False"
	I1216 21:21:48.045056   69158 pod_ready.go:93] pod "coredns-668d6bf9bc-swcxw" in "kube-system" namespace has status "Ready":"True"
	I1216 21:21:48.045095   69158 pod_ready.go:82] duration metric: took 3.509254078s for pod "coredns-668d6bf9bc-swcxw" in "kube-system" namespace to be "Ready" ...
	I1216 21:21:48.045110   69158 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-647112" in "kube-system" namespace to be "Ready" ...
	I1216 21:21:48.051036   69158 pod_ready.go:93] pod "etcd-kindnet-647112" in "kube-system" namespace has status "Ready":"True"
	I1216 21:21:48.051063   69158 pod_ready.go:82] duration metric: took 5.944578ms for pod "etcd-kindnet-647112" in "kube-system" namespace to be "Ready" ...
	I1216 21:21:48.051079   69158 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-647112" in "kube-system" namespace to be "Ready" ...
	I1216 21:21:48.056422   69158 pod_ready.go:93] pod "kube-apiserver-kindnet-647112" in "kube-system" namespace has status "Ready":"True"
	I1216 21:21:48.056451   69158 pod_ready.go:82] duration metric: took 5.363878ms for pod "kube-apiserver-kindnet-647112" in "kube-system" namespace to be "Ready" ...
	I1216 21:21:48.056461   69158 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-647112" in "kube-system" namespace to be "Ready" ...
	I1216 21:21:48.062332   69158 pod_ready.go:93] pod "kube-controller-manager-kindnet-647112" in "kube-system" namespace has status "Ready":"True"
	I1216 21:21:48.062360   69158 pod_ready.go:82] duration metric: took 5.890939ms for pod "kube-controller-manager-kindnet-647112" in "kube-system" namespace to be "Ready" ...
	I1216 21:21:48.062373   69158 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-wmg6p" in "kube-system" namespace to be "Ready" ...
	I1216 21:21:48.221066   69158 pod_ready.go:93] pod "kube-proxy-wmg6p" in "kube-system" namespace has status "Ready":"True"
	I1216 21:21:48.221090   69158 pod_ready.go:82] duration metric: took 158.710305ms for pod "kube-proxy-wmg6p" in "kube-system" namespace to be "Ready" ...
	I1216 21:21:48.221106   69158 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-647112" in "kube-system" namespace to be "Ready" ...
	I1216 21:21:48.620972   69158 pod_ready.go:93] pod "kube-scheduler-kindnet-647112" in "kube-system" namespace has status "Ready":"True"
	I1216 21:21:48.621002   69158 pod_ready.go:82] duration metric: took 399.888738ms for pod "kube-scheduler-kindnet-647112" in "kube-system" namespace to be "Ready" ...
	I1216 21:21:48.621017   69158 pod_ready.go:39] duration metric: took 4.093410864s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
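
The pod_ready.go waits above poll each system-critical pod until its Ready condition reports True. Below is a condensed sketch of such a readiness poll using client-go; the kubeconfig path and pod name are placeholders, and this is not the pod_ready.go implementation itself.

	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Poll a single kube-system pod until Ready or until the deadline passes.
		deadline := time.Now().Add(15 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-kindnet-647112", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("timed out waiting for pod to be Ready")
	}
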
	I1216 21:21:48.621036   69158 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:21:48.621100   69158 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:21:48.637323   69158 api_server.go:72] duration metric: took 17.276839326s to wait for apiserver process to appear ...
	I1216 21:21:48.637360   69158 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:21:48.637384   69158 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I1216 21:21:48.642999   69158 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I1216 21:21:48.644085   69158 api_server.go:141] control plane version: v1.32.0
	I1216 21:21:48.644107   69158 api_server.go:131] duration metric: took 6.740181ms to wait for apiserver health ...
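
The healthz probe above is an HTTPS GET against the API server endpoint, where a 200 response with body "ok" counts as healthy. A minimal sketch of that probe follows; certificate verification is skipped here purely to keep the example self-contained, whereas a real client would trust the cluster CA instead.

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)
	
	func main() {
		// InsecureSkipVerify keeps the example self-contained; a real client
		// would be configured with the cluster's CA certificate.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.61.201:8443/healthz")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// The log above records exactly this pair: the status code and the "ok" body.
		fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
	}
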
	I1216 21:21:48.644115   69158 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:21:48.825664   69158 system_pods.go:59] 8 kube-system pods found
	I1216 21:21:48.825712   69158 system_pods.go:61] "coredns-668d6bf9bc-swcxw" [51d5e1c1-a13c-4f84-a0a2-1596f602c617] Running
	I1216 21:21:48.825720   69158 system_pods.go:61] "etcd-kindnet-647112" [e7c0a41e-8aea-4f8b-9a74-2adfaff6d7f0] Running
	I1216 21:21:48.825726   69158 system_pods.go:61] "kindnet-sq6fd" [5e9c8805-ef32-4a3c-b8bd-cb208ebf3f92] Running
	I1216 21:21:48.825731   69158 system_pods.go:61] "kube-apiserver-kindnet-647112" [2d1417e8-c2c3-43fd-ba01-9e9efbbfbdc7] Running
	I1216 21:21:48.825736   69158 system_pods.go:61] "kube-controller-manager-kindnet-647112" [2b45fe8f-b529-4948-859b-7bb39e23cdb3] Running
	I1216 21:21:48.825742   69158 system_pods.go:61] "kube-proxy-wmg6p" [42161636-148a-4705-9c4e-12fa9f677c35] Running
	I1216 21:21:48.825747   69158 system_pods.go:61] "kube-scheduler-kindnet-647112" [e8560a55-4105-4a3e-8260-73e29cebe4ad] Running
	I1216 21:21:48.825752   69158 system_pods.go:61] "storage-provisioner" [87a42d28-c340-4b35-8abc-955de1b8bcd3] Running
	I1216 21:21:48.825764   69158 system_pods.go:74] duration metric: took 181.642488ms to wait for pod list to return data ...
	I1216 21:21:48.825776   69158 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:21:49.021183   69158 default_sa.go:45] found service account: "default"
	I1216 21:21:49.021212   69158 default_sa.go:55] duration metric: took 195.430112ms for default service account to be created ...
	I1216 21:21:49.021221   69158 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:21:49.224287   69158 system_pods.go:86] 8 kube-system pods found
	I1216 21:21:49.224321   69158 system_pods.go:89] "coredns-668d6bf9bc-swcxw" [51d5e1c1-a13c-4f84-a0a2-1596f602c617] Running
	I1216 21:21:49.224329   69158 system_pods.go:89] "etcd-kindnet-647112" [e7c0a41e-8aea-4f8b-9a74-2adfaff6d7f0] Running
	I1216 21:21:49.224335   69158 system_pods.go:89] "kindnet-sq6fd" [5e9c8805-ef32-4a3c-b8bd-cb208ebf3f92] Running
	I1216 21:21:49.224341   69158 system_pods.go:89] "kube-apiserver-kindnet-647112" [2d1417e8-c2c3-43fd-ba01-9e9efbbfbdc7] Running
	I1216 21:21:49.224346   69158 system_pods.go:89] "kube-controller-manager-kindnet-647112" [2b45fe8f-b529-4948-859b-7bb39e23cdb3] Running
	I1216 21:21:49.224350   69158 system_pods.go:89] "kube-proxy-wmg6p" [42161636-148a-4705-9c4e-12fa9f677c35] Running
	I1216 21:21:49.224355   69158 system_pods.go:89] "kube-scheduler-kindnet-647112" [e8560a55-4105-4a3e-8260-73e29cebe4ad] Running
	I1216 21:21:49.224360   69158 system_pods.go:89] "storage-provisioner" [87a42d28-c340-4b35-8abc-955de1b8bcd3] Running
	I1216 21:21:49.224370   69158 system_pods.go:126] duration metric: took 203.143158ms to wait for k8s-apps to be running ...
	I1216 21:21:49.224379   69158 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:21:49.224430   69158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:21:49.241420   69158 system_svc.go:56] duration metric: took 17.032624ms WaitForService to wait for kubelet
	I1216 21:21:49.241458   69158 kubeadm.go:582] duration metric: took 17.880981232s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:21:49.241482   69158 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:21:49.421818   69158 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:21:49.421847   69158 node_conditions.go:123] node cpu capacity is 2
	I1216 21:21:49.421858   69158 node_conditions.go:105] duration metric: took 180.371477ms to run NodePressure ...
	I1216 21:21:49.421869   69158 start.go:241] waiting for startup goroutines ...
	I1216 21:21:49.421875   69158 start.go:246] waiting for cluster config update ...
	I1216 21:21:49.421885   69158 start.go:255] writing updated cluster config ...
	I1216 21:21:49.422128   69158 ssh_runner.go:195] Run: rm -f paused
	I1216 21:21:49.474193   69158 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:21:49.476471   69158 out.go:177] * Done! kubectl is now configured to use "kindnet-647112" cluster and "default" namespace by default
	I1216 21:21:46.467282   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | domain custom-flannel-647112 has defined MAC address 52:54:00:1c:4d:2a in network mk-custom-flannel-647112
	I1216 21:21:46.467808   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | unable to find current IP address of domain custom-flannel-647112 in network mk-custom-flannel-647112
	I1216 21:21:46.467839   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | I1216 21:21:46.467754   71530 retry.go:31] will retry after 1.038171124s: waiting for machine to come up
	I1216 21:21:47.507964   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | domain custom-flannel-647112 has defined MAC address 52:54:00:1c:4d:2a in network mk-custom-flannel-647112
	I1216 21:21:47.508435   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | unable to find current IP address of domain custom-flannel-647112 in network mk-custom-flannel-647112
	I1216 21:21:47.508464   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | I1216 21:21:47.508389   71530 retry.go:31] will retry after 1.290454063s: waiting for machine to come up
	I1216 21:21:48.800779   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | domain custom-flannel-647112 has defined MAC address 52:54:00:1c:4d:2a in network mk-custom-flannel-647112
	I1216 21:21:48.801306   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | unable to find current IP address of domain custom-flannel-647112 in network mk-custom-flannel-647112
	I1216 21:21:48.801333   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | I1216 21:21:48.801247   71530 retry.go:31] will retry after 1.140142743s: waiting for machine to come up
	I1216 21:21:49.943704   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | domain custom-flannel-647112 has defined MAC address 52:54:00:1c:4d:2a in network mk-custom-flannel-647112
	I1216 21:21:49.944194   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | unable to find current IP address of domain custom-flannel-647112 in network mk-custom-flannel-647112
	I1216 21:21:49.944228   71506 main.go:141] libmachine: (custom-flannel-647112) DBG | I1216 21:21:49.944144   71530 retry.go:31] will retry after 1.673436043s: waiting for machine to come up
	
	
	==> CRI-O <==
	Dec 16 21:21:52 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:52.893503151Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384112893467700,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20ef42c0-1c43-46f8-a9ec-6a6219678ad2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:21:52 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:52.894336061Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=005cf791-608a-4b3d-beb7-ac4d106bb768 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:21:52 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:52.894449518Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=005cf791-608a-4b3d-beb7-ac4d106bb768 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:21:52 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:52.894935276Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d0826a03ee32bda77ca97335013ae91f002f774efa4f77d0b0a3c75ab0f2fae,PodSandboxId:f96f4d5fc11834abc33de5566a9cef8bbd6a6e647645ce19a6dbf662504eb3f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383088143458605,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e5b12f0-3d96-4dd0-81e7-300b82058d47,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b883d30dedb7ac8cf7ed4d5fbe42cf9c380af2fa2adc7837f5e1eb4f4286d56,PodSandboxId:87210bde75f5e5f2f8c0c6774deec425ad3f89c9206f0585eda832131db80ccc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383087775555413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2qcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac98efa-96ff-4564-93de-4a61de7a6507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbea0505ae515737f507676651e8308dd31354d4e2983604c7600ec4b698315,PodSandboxId:95cded2582e335ced9145c41ce9e157aef56a4e828f6c394a7ec52824df90347,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383087736159835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-fb7wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f2f2c0e7-893f-45ba-8da9-3b03f5560d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8024d16c768a1a12c74b2f6ef94acf5f68515049b2d934b644d99c6b2b9402ba,PodSandboxId:34cfaeb4337fc75cbee3a6fa49712b4ca1d8e595e989ef858c0cf60b220aae69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING
,CreatedAt:1734383087535323304,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njqp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5f1789d-b343-4c2e-b078-4a15f4b18569,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c389817bb05dee5083f0f85846c3e9cccf18b201795c52310482918e60e25df,PodSandboxId:1eb78ce645b2571eb5ade481309d52b02dbb60e7711b946f1e5c6e3986d92840,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383076271345315,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4161c8d3e913d5fb25c2915868fcc95f,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64cd514bcb576b70d0cc71be17b490af4580719763a31c11db97a3606c6a43f7,PodSandboxId:996c6adcce57d835833611f5660c05364fd46dde8d3597fd09fd1f56248554d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734383076151220888,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 540eb587b53eee7d2fdff2e59d720161,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6ba32c1db82f60d989fa33475fbc9acb149b3f07c73c7a5ef49e78ea656bd5f,PodSandboxId:3a803b97c066c51c4b72c4173806d11746ac47dc79cfdd01bc7aeab04d6d1db8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383076196769230,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638856e81522f218edf3c9f433e2fb12,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e575119c7ed0e6e978688998860bb47314a279b21cba2a1376c00f7be1f8d93,PodSandboxId:eff6ffd5c55a090d444b9d767f2456b349970302a3b48a211d3552511c1e2835,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383076132527737,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041e70638e8ff674d941b5a1fa24cadc,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404f75e4f0e84be27c459ae7b16952b6e5ca8cf8aacc77237c4bb2a68a91a662,PodSandboxId:9214074bc484ad26259b24c2cdb17ba7618684e2d71e0e316304ad2dce41a57f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382785972660069,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638856e81522f218edf3c9f433e2fb12,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=005cf791-608a-4b3d-beb7-ac4d106bb768 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:21:52 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:52.960328265Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=249991c7-f458-4bf9-aae4-58b1f80149be name=/runtime.v1.RuntimeService/Version
	Dec 16 21:21:52 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:52.960470812Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=249991c7-f458-4bf9-aae4-58b1f80149be name=/runtime.v1.RuntimeService/Version
	Dec 16 21:21:52 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:52.962769183Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ddf23411-8532-40e1-9ba6-9e728d8e910e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:21:52 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:52.963342963Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384112963309776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ddf23411-8532-40e1-9ba6-9e728d8e910e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:21:52 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:52.964263223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58073c53-315d-409d-b93c-fbc8a1c9b2f4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:21:52 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:52.964370892Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58073c53-315d-409d-b93c-fbc8a1c9b2f4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:21:52 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:52.964793227Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d0826a03ee32bda77ca97335013ae91f002f774efa4f77d0b0a3c75ab0f2fae,PodSandboxId:f96f4d5fc11834abc33de5566a9cef8bbd6a6e647645ce19a6dbf662504eb3f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383088143458605,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e5b12f0-3d96-4dd0-81e7-300b82058d47,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b883d30dedb7ac8cf7ed4d5fbe42cf9c380af2fa2adc7837f5e1eb4f4286d56,PodSandboxId:87210bde75f5e5f2f8c0c6774deec425ad3f89c9206f0585eda832131db80ccc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383087775555413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2qcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac98efa-96ff-4564-93de-4a61de7a6507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbea0505ae515737f507676651e8308dd31354d4e2983604c7600ec4b698315,PodSandboxId:95cded2582e335ced9145c41ce9e157aef56a4e828f6c394a7ec52824df90347,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383087736159835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-fb7wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f2f2c0e7-893f-45ba-8da9-3b03f5560d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8024d16c768a1a12c74b2f6ef94acf5f68515049b2d934b644d99c6b2b9402ba,PodSandboxId:34cfaeb4337fc75cbee3a6fa49712b4ca1d8e595e989ef858c0cf60b220aae69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING
,CreatedAt:1734383087535323304,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njqp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5f1789d-b343-4c2e-b078-4a15f4b18569,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c389817bb05dee5083f0f85846c3e9cccf18b201795c52310482918e60e25df,PodSandboxId:1eb78ce645b2571eb5ade481309d52b02dbb60e7711b946f1e5c6e3986d92840,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383076271345315,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4161c8d3e913d5fb25c2915868fcc95f,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64cd514bcb576b70d0cc71be17b490af4580719763a31c11db97a3606c6a43f7,PodSandboxId:996c6adcce57d835833611f5660c05364fd46dde8d3597fd09fd1f56248554d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734383076151220888,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 540eb587b53eee7d2fdff2e59d720161,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6ba32c1db82f60d989fa33475fbc9acb149b3f07c73c7a5ef49e78ea656bd5f,PodSandboxId:3a803b97c066c51c4b72c4173806d11746ac47dc79cfdd01bc7aeab04d6d1db8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383076196769230,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638856e81522f218edf3c9f433e2fb12,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e575119c7ed0e6e978688998860bb47314a279b21cba2a1376c00f7be1f8d93,PodSandboxId:eff6ffd5c55a090d444b9d767f2456b349970302a3b48a211d3552511c1e2835,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383076132527737,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041e70638e8ff674d941b5a1fa24cadc,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404f75e4f0e84be27c459ae7b16952b6e5ca8cf8aacc77237c4bb2a68a91a662,PodSandboxId:9214074bc484ad26259b24c2cdb17ba7618684e2d71e0e316304ad2dce41a57f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382785972660069,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638856e81522f218edf3c9f433e2fb12,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58073c53-315d-409d-b93c-fbc8a1c9b2f4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:21:53 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:53.021678672Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e75c0ee4-191f-4772-be91-249286888a9e name=/runtime.v1.RuntimeService/Version
	Dec 16 21:21:53 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:53.021783019Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e75c0ee4-191f-4772-be91-249286888a9e name=/runtime.v1.RuntimeService/Version
	Dec 16 21:21:53 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:53.023169557Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c1f3a63-5d3e-45c2-a1d8-35c8e27f2ed6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:21:53 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:53.023815805Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384113023788616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c1f3a63-5d3e-45c2-a1d8-35c8e27f2ed6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:21:53 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:53.024551410Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0433a3f8-65ad-41b6-a949-1edc05029a33 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:21:53 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:53.025020614Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0433a3f8-65ad-41b6-a949-1edc05029a33 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:21:53 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:53.025874067Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d0826a03ee32bda77ca97335013ae91f002f774efa4f77d0b0a3c75ab0f2fae,PodSandboxId:f96f4d5fc11834abc33de5566a9cef8bbd6a6e647645ce19a6dbf662504eb3f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383088143458605,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e5b12f0-3d96-4dd0-81e7-300b82058d47,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b883d30dedb7ac8cf7ed4d5fbe42cf9c380af2fa2adc7837f5e1eb4f4286d56,PodSandboxId:87210bde75f5e5f2f8c0c6774deec425ad3f89c9206f0585eda832131db80ccc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383087775555413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2qcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac98efa-96ff-4564-93de-4a61de7a6507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbea0505ae515737f507676651e8308dd31354d4e2983604c7600ec4b698315,PodSandboxId:95cded2582e335ced9145c41ce9e157aef56a4e828f6c394a7ec52824df90347,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383087736159835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-fb7wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f2f2c0e7-893f-45ba-8da9-3b03f5560d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8024d16c768a1a12c74b2f6ef94acf5f68515049b2d934b644d99c6b2b9402ba,PodSandboxId:34cfaeb4337fc75cbee3a6fa49712b4ca1d8e595e989ef858c0cf60b220aae69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING
,CreatedAt:1734383087535323304,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njqp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5f1789d-b343-4c2e-b078-4a15f4b18569,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c389817bb05dee5083f0f85846c3e9cccf18b201795c52310482918e60e25df,PodSandboxId:1eb78ce645b2571eb5ade481309d52b02dbb60e7711b946f1e5c6e3986d92840,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383076271345315,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4161c8d3e913d5fb25c2915868fcc95f,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64cd514bcb576b70d0cc71be17b490af4580719763a31c11db97a3606c6a43f7,PodSandboxId:996c6adcce57d835833611f5660c05364fd46dde8d3597fd09fd1f56248554d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734383076151220888,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 540eb587b53eee7d2fdff2e59d720161,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6ba32c1db82f60d989fa33475fbc9acb149b3f07c73c7a5ef49e78ea656bd5f,PodSandboxId:3a803b97c066c51c4b72c4173806d11746ac47dc79cfdd01bc7aeab04d6d1db8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383076196769230,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638856e81522f218edf3c9f433e2fb12,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e575119c7ed0e6e978688998860bb47314a279b21cba2a1376c00f7be1f8d93,PodSandboxId:eff6ffd5c55a090d444b9d767f2456b349970302a3b48a211d3552511c1e2835,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383076132527737,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041e70638e8ff674d941b5a1fa24cadc,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404f75e4f0e84be27c459ae7b16952b6e5ca8cf8aacc77237c4bb2a68a91a662,PodSandboxId:9214074bc484ad26259b24c2cdb17ba7618684e2d71e0e316304ad2dce41a57f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382785972660069,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638856e81522f218edf3c9f433e2fb12,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0433a3f8-65ad-41b6-a949-1edc05029a33 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:21:53 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:53.082032775Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=076e7b14-41d3-4fa1-bf71-cbde63594b93 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:21:53 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:53.082141145Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=076e7b14-41d3-4fa1-bf71-cbde63594b93 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:21:53 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:53.083486217Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f0c4ff6-da4f-44c1-98ac-a2855e7b2425 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:21:53 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:53.084365726Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384113084325254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f0c4ff6-da4f-44c1-98ac-a2855e7b2425 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:21:53 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:53.085524923Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c789169-764d-48c0-b03c-5ac364dfd5e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:21:53 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:53.085669279Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c789169-764d-48c0-b03c-5ac364dfd5e2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:21:53 default-k8s-diff-port-327790 crio[725]: time="2024-12-16 21:21:53.086102429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4d0826a03ee32bda77ca97335013ae91f002f774efa4f77d0b0a3c75ab0f2fae,PodSandboxId:f96f4d5fc11834abc33de5566a9cef8bbd6a6e647645ce19a6dbf662504eb3f7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383088143458605,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e5b12f0-3d96-4dd0-81e7-300b82058d47,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b883d30dedb7ac8cf7ed4d5fbe42cf9c380af2fa2adc7837f5e1eb4f4286d56,PodSandboxId:87210bde75f5e5f2f8c0c6774deec425ad3f89c9206f0585eda832131db80ccc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383087775555413,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-2qcfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac98efa-96ff-4564-93de-4a61de7a6507,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cbea0505ae515737f507676651e8308dd31354d4e2983604c7600ec4b698315,PodSandboxId:95cded2582e335ced9145c41ce9e157aef56a4e828f6c394a7ec52824df90347,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383087736159835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-fb7wx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f2f2c0e7-893f-45ba-8da9-3b03f5560d89,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8024d16c768a1a12c74b2f6ef94acf5f68515049b2d934b644d99c6b2b9402ba,PodSandboxId:34cfaeb4337fc75cbee3a6fa49712b4ca1d8e595e989ef858c0cf60b220aae69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING
,CreatedAt:1734383087535323304,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-njqp8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5f1789d-b343-4c2e-b078-4a15f4b18569,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c389817bb05dee5083f0f85846c3e9cccf18b201795c52310482918e60e25df,PodSandboxId:1eb78ce645b2571eb5ade481309d52b02dbb60e7711b946f1e5c6e3986d92840,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383076271345315,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4161c8d3e913d5fb25c2915868fcc95f,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64cd514bcb576b70d0cc71be17b490af4580719763a31c11db97a3606c6a43f7,PodSandboxId:996c6adcce57d835833611f5660c05364fd46dde8d3597fd09fd1f56248554d1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734383076151220888,Labels:map[string]string{io.ku
bernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 540eb587b53eee7d2fdff2e59d720161,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6ba32c1db82f60d989fa33475fbc9acb149b3f07c73c7a5ef49e78ea656bd5f,PodSandboxId:3a803b97c066c51c4b72c4173806d11746ac47dc79cfdd01bc7aeab04d6d1db8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383076196769230,Labels:map[string]string{io.kube
rnetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638856e81522f218edf3c9f433e2fb12,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e575119c7ed0e6e978688998860bb47314a279b21cba2a1376c00f7be1f8d93,PodSandboxId:eff6ffd5c55a090d444b9d767f2456b349970302a3b48a211d3552511c1e2835,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383076132527737,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 041e70638e8ff674d941b5a1fa24cadc,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:404f75e4f0e84be27c459ae7b16952b6e5ca8cf8aacc77237c4bb2a68a91a662,PodSandboxId:9214074bc484ad26259b24c2cdb17ba7618684e2d71e0e316304ad2dce41a57f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382785972660069,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-327790,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 638856e81522f218edf3c9f433e2fb12,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c789169-764d-48c0-b03c-5ac364dfd5e2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4d0826a03ee32       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Running             storage-provisioner       0                   f96f4d5fc1183       storage-provisioner
	8b883d30dedb7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   87210bde75f5e       coredns-668d6bf9bc-2qcfx
	1cbea0505ae51       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   95cded2582e33       coredns-668d6bf9bc-fb7wx
	8024d16c768a1       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   17 minutes ago      Running             kube-proxy                0                   34cfaeb4337fc       kube-proxy-njqp8
	7c389817bb05d       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   17 minutes ago      Running             etcd                      2                   1eb78ce645b25       etcd-default-k8s-diff-port-327790
	f6ba32c1db82f       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   17 minutes ago      Running             kube-apiserver            2                   3a803b97c066c       kube-apiserver-default-k8s-diff-port-327790
	64cd514bcb576       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   17 minutes ago      Running             kube-scheduler            2                   996c6adcce57d       kube-scheduler-default-k8s-diff-port-327790
	0e575119c7ed0       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   17 minutes ago      Running             kube-controller-manager   2                   eff6ffd5c55a0       kube-controller-manager-default-k8s-diff-port-327790
	404f75e4f0e84       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   22 minutes ago      Exited              kube-apiserver            1                   9214074bc484a       kube-apiserver-default-k8s-diff-port-327790
	
	
	==> coredns [1cbea0505ae515737f507676651e8308dd31354d4e2983604c7600ec4b698315] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [8b883d30dedb7ac8cf7ed4d5fbe42cf9c380af2fa2adc7837f5e1eb4f4286d56] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-327790
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-327790
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477
	                    minikube.k8s.io/name=default-k8s-diff-port-327790
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T21_04_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 21:04:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-327790
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 21:21:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 21:17:58 +0000   Mon, 16 Dec 2024 21:04:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 21:17:58 +0000   Mon, 16 Dec 2024 21:04:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 21:17:58 +0000   Mon, 16 Dec 2024 21:04:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 21:17:58 +0000   Mon, 16 Dec 2024 21:04:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    default-k8s-diff-port-327790
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c304e91f28e48498b23e62d0abccc28
	  System UUID:                5c304e91-f28e-4849-8b23-e62d0abccc28
	  Boot ID:                    d8fa6d28-1be2-4bf3-9cb6-2881b7c2f2fd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-2qcfx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-668d6bf9bc-fb7wx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-default-k8s-diff-port-327790                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-default-k8s-diff-port-327790             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-327790    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-njqp8                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-default-k8s-diff-port-327790             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-f79f97bbb-84xtf                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 17m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node default-k8s-diff-port-327790 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node default-k8s-diff-port-327790 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node default-k8s-diff-port-327790 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m   node-controller  Node default-k8s-diff-port-327790 event: Registered Node default-k8s-diff-port-327790 in Controller
	
	
	==> dmesg <==
	[  +0.053113] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041934] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.050363] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.945888] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.637075] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.063997] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.058153] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059905] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.209206] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.170050] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.387564] systemd-fstab-generator[716]: Ignoring "noauto" option for root device
	[  +4.689490] systemd-fstab-generator[808]: Ignoring "noauto" option for root device
	[  +0.061730] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.754898] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +5.611970] kauditd_printk_skb: 97 callbacks suppressed
	[  +9.327600] kauditd_printk_skb: 90 callbacks suppressed
	[Dec16 21:04] kauditd_printk_skb: 4 callbacks suppressed
	[ +12.902544] systemd-fstab-generator[2716]: Ignoring "noauto" option for root device
	[  +4.661172] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.395475] systemd-fstab-generator[3049]: Ignoring "noauto" option for root device
	[  +4.894467] systemd-fstab-generator[3161]: Ignoring "noauto" option for root device
	[  +0.092903] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.323443] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [7c389817bb05dee5083f0f85846c3e9cccf18b201795c52310482918e60e25df] <==
	{"level":"warn","ts":"2024-12-16T21:20:10.818164Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T21:20:10.258058Z","time spent":"560.099859ms","remote":"127.0.0.1:47250","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-12-16T21:20:10.818355Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.657858ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.162\" limit:1 ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-12-16T21:20:10.818394Z","caller":"traceutil/trace.go:171","msg":"trace[1856192755] range","detail":"{range_begin:/registry/masterleases/192.168.39.162; range_end:; response_count:1; response_revision:1223; }","duration":"240.717153ms","start":"2024-12-16T21:20:10.577670Z","end":"2024-12-16T21:20:10.818387Z","steps":["trace[1856192755] 'agreement among raft nodes before linearized reading'  (duration: 240.625081ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T21:20:10.818496Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"470.130118ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-12-16T21:20:10.818532Z","caller":"traceutil/trace.go:171","msg":"trace[718732] range","detail":"{range_begin:/registry/certificatesigningrequests/; range_end:/registry/certificatesigningrequests0; response_count:0; response_revision:1223; }","duration":"470.200431ms","start":"2024-12-16T21:20:10.348324Z","end":"2024-12-16T21:20:10.818525Z","steps":["trace[718732] 'agreement among raft nodes before linearized reading'  (duration: 470.144108ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T21:20:10.818548Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T21:20:10.348304Z","time spent":"470.238932ms","remote":"127.0.0.1:47484","response type":"/etcdserverpb.KV/Range","request count":0,"request size":80,"response count":1,"response size":31,"request content":"key:\"/registry/certificatesigningrequests/\" range_end:\"/registry/certificatesigningrequests0\" count_only:true "}
	{"level":"warn","ts":"2024-12-16T21:20:11.159307Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.693049ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12472318745140111736 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:2d1693d14985a977>","response":"size:41"}
	{"level":"info","ts":"2024-12-16T21:20:11.159436Z","caller":"traceutil/trace.go:171","msg":"trace[1695872368] linearizableReadLoop","detail":"{readStateIndex:1426; appliedIndex:1425; }","duration":"282.261058ms","start":"2024-12-16T21:20:10.877160Z","end":"2024-12-16T21:20:11.159421Z","steps":["trace[1695872368] 'read index received'  (duration: 74.277264ms)","trace[1695872368] 'applied index is now lower than readState.Index'  (duration: 207.982638ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-16T21:20:11.159547Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.833292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T21:20:11.159649Z","caller":"traceutil/trace.go:171","msg":"trace[1268263736] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1223; }","duration":"100.970273ms","start":"2024-12-16T21:20:11.058668Z","end":"2024-12-16T21:20:11.159639Z","steps":["trace[1268263736] 'agreement among raft nodes before linearized reading'  (duration: 100.841837ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T21:20:11.159560Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"282.387028ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T21:20:11.159789Z","caller":"traceutil/trace.go:171","msg":"trace[1438362981] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1223; }","duration":"282.621051ms","start":"2024-12-16T21:20:10.877154Z","end":"2024-12-16T21:20:11.159775Z","steps":["trace[1438362981] 'agreement among raft nodes before linearized reading'  (duration: 282.351197ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T21:20:11.159954Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T21:20:10.821797Z","time spent":"338.154649ms","remote":"127.0.0.1:47258","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-12-16T21:20:50.630778Z","caller":"traceutil/trace.go:171","msg":"trace[1995436214] transaction","detail":"{read_only:false; response_revision:1254; number_of_response:1; }","duration":"122.769889ms","start":"2024-12-16T21:20:50.507927Z","end":"2024-12-16T21:20:50.630697Z","steps":["trace[1995436214] 'process raft request'  (duration: 122.207568ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T21:20:50.888089Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.087508ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12472318745140111968 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:2d1693d14985aa5f>","response":"size:41"}
	{"level":"info","ts":"2024-12-16T21:20:51.004832Z","caller":"traceutil/trace.go:171","msg":"trace[1290533811] transaction","detail":"{read_only:false; response_revision:1255; number_of_response:1; }","duration":"114.471623ms","start":"2024-12-16T21:20:50.890333Z","end":"2024-12-16T21:20:51.004805Z","steps":["trace[1290533811] 'process raft request'  (duration: 112.999863ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T21:21:15.989047Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.550925ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T21:21:15.989191Z","caller":"traceutil/trace.go:171","msg":"trace[1782652] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1275; }","duration":"112.714554ms","start":"2024-12-16T21:21:15.876465Z","end":"2024-12-16T21:21:15.989179Z","steps":["trace[1782652] 'range keys from in-memory index tree'  (duration: 112.538931ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T21:21:15.989029Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.986351ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12472318745140112117 > lease_revoke:<id:2d1693d14985aa97>","response":"size:29"}
	{"level":"warn","ts":"2024-12-16T21:21:46.081856Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.649102ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12472318745140112288 > lease_revoke:<id:2d1693d14985ab45>","response":"size:29"}
	{"level":"warn","ts":"2024-12-16T21:21:46.082103Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.833595ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T21:21:46.082177Z","caller":"traceutil/trace.go:171","msg":"trace[644820698] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1300; }","duration":"204.915862ms","start":"2024-12-16T21:21:45.877247Z","end":"2024-12-16T21:21:46.082163Z","steps":["trace[644820698] 'range keys from in-memory index tree'  (duration: 204.821471ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T21:21:46.082101Z","caller":"traceutil/trace.go:171","msg":"trace[1111226230] linearizableReadLoop","detail":"{readStateIndex:1522; appliedIndex:1521; }","duration":"142.943453ms","start":"2024-12-16T21:21:45.939135Z","end":"2024-12-16T21:21:46.082078Z","steps":["trace[1111226230] 'read index received'  (duration: 23.096µs)","trace[1111226230] 'applied index is now lower than readState.Index'  (duration: 142.919078ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-16T21:21:46.083746Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.59728ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T21:21:46.083896Z","caller":"traceutil/trace.go:171","msg":"trace[1421338157] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1300; }","duration":"144.774074ms","start":"2024-12-16T21:21:45.939110Z","end":"2024-12-16T21:21:46.083884Z","steps":["trace[1421338157] 'agreement among raft nodes before linearized reading'  (duration: 143.022659ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:21:53 up 22 min,  0 users,  load average: 0.98, 0.38, 0.20
	Linux default-k8s-diff-port-327790 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [404f75e4f0e84be27c459ae7b16952b6e5ca8cf8aacc77237c4bb2a68a91a662] <==
	W1216 21:04:31.963667       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.031736       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.061777       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.083171       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.092861       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.178706       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.186308       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.244924       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.253523       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.264310       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.303837       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.317695       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.356961       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.426977       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.433397       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.505645       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.506922       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.548087       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.548096       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.669498       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.675268       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.749855       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.892492       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:32.990922       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:33.037913       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f6ba32c1db82f60d989fa33475fbc9acb149b3f07c73c7a5ef49e78ea656bd5f] <==
	E1216 21:17:39.678801       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1216 21:17:39.680123       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1216 21:19:38.674155       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:19:38.674316       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1216 21:19:39.676288       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:19:39.676457       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1216 21:19:39.676339       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:19:39.676683       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 21:19:39.677816       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 21:19:39.677871       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1216 21:20:39.678746       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:20:39.679001       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1216 21:20:39.678966       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:20:39.679118       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 21:20:39.680299       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 21:20:39.680422       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0e575119c7ed0e6e978688998860bb47314a279b21cba2a1376c00f7be1f8d93] <==
	E1216 21:16:45.435802       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:16:45.503809       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:17:15.442715       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:17:15.512417       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:17:45.449923       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:17:45.523224       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 21:17:58.536205       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-327790"
	E1216 21:18:15.457320       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:18:15.532296       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:18:45.463669       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:18:45.540790       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:19:15.471987       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:19:15.553203       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:19:45.480798       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:19:45.562322       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:20:15.487528       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:20:15.572313       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:20:45.497139       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:20:45.583491       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 21:21:03.291399       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="434.463µs"
	E1216 21:21:15.504532       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:21:15.596225       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 21:21:17.287361       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="53.367µs"
	E1216 21:21:45.511341       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:21:45.608846       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [8024d16c768a1a12c74b2f6ef94acf5f68515049b2d934b644d99c6b2b9402ba] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1216 21:04:48.267337       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1216 21:04:48.295520       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.162"]
	E1216 21:04:48.295763       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 21:04:48.367248       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1216 21:04:48.367341       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 21:04:48.367379       1 server_linux.go:170] "Using iptables Proxier"
	I1216 21:04:48.370174       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 21:04:48.370555       1 server.go:497] "Version info" version="v1.32.0"
	I1216 21:04:48.370811       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 21:04:48.372421       1 config.go:199] "Starting service config controller"
	I1216 21:04:48.372867       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 21:04:48.373097       1 config.go:105] "Starting endpoint slice config controller"
	I1216 21:04:48.373132       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 21:04:48.373773       1 config.go:329] "Starting node config controller"
	I1216 21:04:48.373810       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 21:04:48.473727       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 21:04:48.474625       1 shared_informer.go:320] Caches are synced for service config
	I1216 21:04:48.474645       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [64cd514bcb576b70d0cc71be17b490af4580719763a31c11db97a3606c6a43f7] <==
	W1216 21:04:38.724310       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:38.724340       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:39.646332       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1216 21:04:39.646385       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1216 21:04:39.692848       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:39.692959       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:39.738198       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:39.738334       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:39.840361       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 21:04:39.840430       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:39.849893       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:39.849948       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:39.983762       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1216 21:04:39.983888       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:40.025254       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 21:04:40.025290       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:40.059992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 21:04:40.060059       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:40.110696       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:40.110749       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:40.117840       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1216 21:04:40.117874       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:40.122553       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:40.122647       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1216 21:04:42.611304       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 21:20:48 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:20:48.308685    3056 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-84xtf" podUID="569c6717-dc12-474f-8156-d2dd9e410a54"
	Dec 16 21:20:51 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:20:51.629797    3056 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384051629210216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:20:51 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:20:51.629846    3056 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384051629210216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:21:01 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:01.632108    3056 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384061631507196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:21:01 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:01.632700    3056 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384061631507196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:21:03 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:03.265557    3056 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-84xtf" podUID="569c6717-dc12-474f-8156-d2dd9e410a54"
	Dec 16 21:21:11 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:11.635893    3056 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384071635107953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:21:11 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:11.635960    3056 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384071635107953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:21:17 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:17.269935    3056 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-84xtf" podUID="569c6717-dc12-474f-8156-d2dd9e410a54"
	Dec 16 21:21:21 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:21.638985    3056 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384081638242613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:21:21 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:21.639498    3056 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384081638242613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:21:28 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:28.266314    3056 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-84xtf" podUID="569c6717-dc12-474f-8156-d2dd9e410a54"
	Dec 16 21:21:31 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:31.653910    3056 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384091649229639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:21:31 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:31.654147    3056 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384091649229639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:21:40 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:40.265721    3056 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-84xtf" podUID="569c6717-dc12-474f-8156-d2dd9e410a54"
	Dec 16 21:21:41 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:41.287234    3056 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 16 21:21:41 default-k8s-diff-port-327790 kubelet[3056]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 16 21:21:41 default-k8s-diff-port-327790 kubelet[3056]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 16 21:21:41 default-k8s-diff-port-327790 kubelet[3056]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 16 21:21:41 default-k8s-diff-port-327790 kubelet[3056]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 16 21:21:41 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:41.656132    3056 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384101655460601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:21:41 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:41.656193    3056 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384101655460601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:21:51 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:51.265883    3056 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-84xtf" podUID="569c6717-dc12-474f-8156-d2dd9e410a54"
	Dec 16 21:21:51 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:51.659673    3056 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384111658565422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:21:51 default-k8s-diff-port-327790 kubelet[3056]: E1216 21:21:51.659945    3056 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384111658565422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [4d0826a03ee32bda77ca97335013ae91f002f774efa4f77d0b0a3c75ab0f2fae] <==
	I1216 21:04:48.282175       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 21:04:48.302562       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 21:04:48.302894       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 21:04:48.317655       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 21:04:48.317838       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-327790_ffff1088-c911-478b-89ff-c07daeb971b7!
	I1216 21:04:48.319492       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ec7988d-c1d8-4339-ae3a-872a45176971", APIVersion:"v1", ResourceVersion:"389", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-327790_ffff1088-c911-478b-89ff-c07daeb971b7 became leader
	I1216 21:04:48.431336       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-327790_ffff1088-c911-478b-89ff-c07daeb971b7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-327790 -n default-k8s-diff-port-327790
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-327790 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-84xtf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-327790 describe pod metrics-server-f79f97bbb-84xtf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-327790 describe pod metrics-server-f79f97bbb-84xtf: exit status 1 (89.956337ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-84xtf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-327790 describe pod metrics-server-f79f97bbb-84xtf: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (473.43s)
E1216 21:23:28.549788   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:23:29.762452   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (364.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-232338 -n no-preload-232338
start_stop_delete_test.go:285: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-16 21:20:14.431000033 +0000 UTC m=+6332.904647786
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-232338 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-232338 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.807µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-232338 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-232338 -n no-preload-232338
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-232338 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-232338 logs -n 25: (1.446589269s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-606219            | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-270954                              | cert-expiration-270954       | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-384008 | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | disable-driver-mounts-384008                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:52 UTC |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-232338             | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC | 16 Dec 24 20:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-327790  | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC | 16 Dec 24 20:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC |                     |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-847766        | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-606219                 | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC | 16 Dec 24 21:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-232338                  | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC | 16 Dec 24 21:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-327790       | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 20:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 21:04 UTC |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-847766             | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 20:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 21:18 UTC | 16 Dec 24 21:18 UTC |
	| start   | -p newest-cni-194530 --memory=2200 --alsologtostderr   | newest-cni-194530            | jenkins | v1.34.0 | 16 Dec 24 21:18 UTC | 16 Dec 24 21:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-194530             | newest-cni-194530            | jenkins | v1.34.0 | 16 Dec 24 21:19 UTC | 16 Dec 24 21:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-194530                                   | newest-cni-194530            | jenkins | v1.34.0 | 16 Dec 24 21:19 UTC | 16 Dec 24 21:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-194530                  | newest-cni-194530            | jenkins | v1.34.0 | 16 Dec 24 21:19 UTC | 16 Dec 24 21:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-194530 --memory=2200 --alsologtostderr   | newest-cni-194530            | jenkins | v1.34.0 | 16 Dec 24 21:19 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 21:19:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 21:19:42.930884   68111 out.go:345] Setting OutFile to fd 1 ...
	I1216 21:19:42.931007   68111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 21:19:42.931013   68111 out.go:358] Setting ErrFile to fd 2...
	I1216 21:19:42.931017   68111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 21:19:42.931189   68111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 21:19:42.931718   68111 out.go:352] Setting JSON to false
	I1216 21:19:42.932728   68111 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7328,"bootTime":1734376655,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 21:19:42.932813   68111 start.go:139] virtualization: kvm guest
	I1216 21:19:42.935375   68111 out.go:177] * [newest-cni-194530] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 21:19:42.936955   68111 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 21:19:42.936953   68111 notify.go:220] Checking for updates...
	I1216 21:19:42.939586   68111 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 21:19:42.940875   68111 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:19:42.942251   68111 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 21:19:42.943767   68111 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 21:19:42.945773   68111 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 21:19:42.947751   68111 config.go:182] Loaded profile config "newest-cni-194530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:19:42.948239   68111 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:19:42.948312   68111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:19:42.965200   68111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I1216 21:19:42.965685   68111 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:19:42.966228   68111 main.go:141] libmachine: Using API Version  1
	I1216 21:19:42.966248   68111 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:19:42.966559   68111 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:19:42.966750   68111 main.go:141] libmachine: (newest-cni-194530) Calling .DriverName
	I1216 21:19:42.966984   68111 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 21:19:42.967365   68111 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:19:42.967404   68111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:19:42.983605   68111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37617
	I1216 21:19:42.984009   68111 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:19:42.984582   68111 main.go:141] libmachine: Using API Version  1
	I1216 21:19:42.984604   68111 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:19:42.984949   68111 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:19:42.985165   68111 main.go:141] libmachine: (newest-cni-194530) Calling .DriverName
	I1216 21:19:43.024820   68111 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 21:19:43.026065   68111 start.go:297] selected driver: kvm2
	I1216 21:19:43.026091   68111 start.go:901] validating driver "kvm2" against &{Name:newest-cni-194530 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.32.0 ClusterName:newest-cni-194530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s S
cheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:19:43.026239   68111 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 21:19:43.027160   68111 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 21:19:43.027265   68111 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 21:19:43.043939   68111 install.go:137] /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1216 21:19:43.044364   68111 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 21:19:43.044397   68111 cni.go:84] Creating CNI manager for ""
	I1216 21:19:43.044439   68111 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:19:43.044480   68111 start.go:340] cluster config:
	{Name:newest-cni-194530 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:newest-cni-194530 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:19:43.044581   68111 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 21:19:43.047213   68111 out.go:177] * Starting "newest-cni-194530" primary control-plane node in "newest-cni-194530" cluster
	I1216 21:19:43.048483   68111 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 21:19:43.048563   68111 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1216 21:19:43.048575   68111 cache.go:56] Caching tarball of preloaded images
	I1216 21:19:43.048686   68111 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 21:19:43.048704   68111 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1216 21:19:43.048819   68111 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/newest-cni-194530/config.json ...
	I1216 21:19:43.049095   68111 start.go:360] acquireMachinesLock for newest-cni-194530: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 21:19:43.049150   68111 start.go:364] duration metric: took 30.616µs to acquireMachinesLock for "newest-cni-194530"
	I1216 21:19:43.049171   68111 start.go:96] Skipping create...Using existing machine configuration
	I1216 21:19:43.049181   68111 fix.go:54] fixHost starting: 
	I1216 21:19:43.049445   68111 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:19:43.049495   68111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:19:43.065198   68111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33373
	I1216 21:19:43.065728   68111 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:19:43.066252   68111 main.go:141] libmachine: Using API Version  1
	I1216 21:19:43.066274   68111 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:19:43.066579   68111 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:19:43.066794   68111 main.go:141] libmachine: (newest-cni-194530) Calling .DriverName
	I1216 21:19:43.066931   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetState
	I1216 21:19:43.068676   68111 fix.go:112] recreateIfNeeded on newest-cni-194530: state=Stopped err=<nil>
	I1216 21:19:43.068700   68111 main.go:141] libmachine: (newest-cni-194530) Calling .DriverName
	W1216 21:19:43.068882   68111 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 21:19:43.070886   68111 out.go:177] * Restarting existing kvm2 VM for "newest-cni-194530" ...
	I1216 21:19:43.072205   68111 main.go:141] libmachine: (newest-cni-194530) Calling .Start
	I1216 21:19:43.072407   68111 main.go:141] libmachine: (newest-cni-194530) Ensuring networks are active...
	I1216 21:19:43.073295   68111 main.go:141] libmachine: (newest-cni-194530) Ensuring network default is active
	I1216 21:19:43.073697   68111 main.go:141] libmachine: (newest-cni-194530) Ensuring network mk-newest-cni-194530 is active
	I1216 21:19:43.074174   68111 main.go:141] libmachine: (newest-cni-194530) Getting domain xml...
	I1216 21:19:43.075126   68111 main.go:141] libmachine: (newest-cni-194530) Creating domain...
	I1216 21:19:44.377877   68111 main.go:141] libmachine: (newest-cni-194530) Waiting to get IP...
	I1216 21:19:44.378855   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:19:44.379456   68111 main.go:141] libmachine: (newest-cni-194530) DBG | unable to find current IP address of domain newest-cni-194530 in network mk-newest-cni-194530
	I1216 21:19:44.379554   68111 main.go:141] libmachine: (newest-cni-194530) DBG | I1216 21:19:44.379426   68148 retry.go:31] will retry after 247.193744ms: waiting for machine to come up
	I1216 21:19:44.628029   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:19:44.628550   68111 main.go:141] libmachine: (newest-cni-194530) DBG | unable to find current IP address of domain newest-cni-194530 in network mk-newest-cni-194530
	I1216 21:19:44.628576   68111 main.go:141] libmachine: (newest-cni-194530) DBG | I1216 21:19:44.628504   68148 retry.go:31] will retry after 326.228896ms: waiting for machine to come up
	I1216 21:19:44.956182   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:19:44.956644   68111 main.go:141] libmachine: (newest-cni-194530) DBG | unable to find current IP address of domain newest-cni-194530 in network mk-newest-cni-194530
	I1216 21:19:44.956676   68111 main.go:141] libmachine: (newest-cni-194530) DBG | I1216 21:19:44.956584   68148 retry.go:31] will retry after 451.712006ms: waiting for machine to come up
	I1216 21:19:45.410320   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:19:45.410701   68111 main.go:141] libmachine: (newest-cni-194530) DBG | unable to find current IP address of domain newest-cni-194530 in network mk-newest-cni-194530
	I1216 21:19:45.410749   68111 main.go:141] libmachine: (newest-cni-194530) DBG | I1216 21:19:45.410685   68148 retry.go:31] will retry after 459.143128ms: waiting for machine to come up
	I1216 21:19:45.871220   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:19:45.871862   68111 main.go:141] libmachine: (newest-cni-194530) DBG | unable to find current IP address of domain newest-cni-194530 in network mk-newest-cni-194530
	I1216 21:19:45.871888   68111 main.go:141] libmachine: (newest-cni-194530) DBG | I1216 21:19:45.871794   68148 retry.go:31] will retry after 679.179533ms: waiting for machine to come up
	I1216 21:19:46.552290   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:19:46.552808   68111 main.go:141] libmachine: (newest-cni-194530) DBG | unable to find current IP address of domain newest-cni-194530 in network mk-newest-cni-194530
	I1216 21:19:46.552834   68111 main.go:141] libmachine: (newest-cni-194530) DBG | I1216 21:19:46.552755   68148 retry.go:31] will retry after 824.957207ms: waiting for machine to come up
	I1216 21:19:47.379195   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:19:47.379671   68111 main.go:141] libmachine: (newest-cni-194530) DBG | unable to find current IP address of domain newest-cni-194530 in network mk-newest-cni-194530
	I1216 21:19:47.379697   68111 main.go:141] libmachine: (newest-cni-194530) DBG | I1216 21:19:47.379642   68148 retry.go:31] will retry after 824.62264ms: waiting for machine to come up
	I1216 21:19:48.206416   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:19:48.206947   68111 main.go:141] libmachine: (newest-cni-194530) DBG | unable to find current IP address of domain newest-cni-194530 in network mk-newest-cni-194530
	I1216 21:19:48.206977   68111 main.go:141] libmachine: (newest-cni-194530) DBG | I1216 21:19:48.206901   68148 retry.go:31] will retry after 1.099364124s: waiting for machine to come up
	I1216 21:19:49.307588   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:19:49.308039   68111 main.go:141] libmachine: (newest-cni-194530) DBG | unable to find current IP address of domain newest-cni-194530 in network mk-newest-cni-194530
	I1216 21:19:49.308072   68111 main.go:141] libmachine: (newest-cni-194530) DBG | I1216 21:19:49.307986   68148 retry.go:31] will retry after 1.73850017s: waiting for machine to come up
	I1216 21:19:51.048936   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:19:51.049468   68111 main.go:141] libmachine: (newest-cni-194530) DBG | unable to find current IP address of domain newest-cni-194530 in network mk-newest-cni-194530
	I1216 21:19:51.049495   68111 main.go:141] libmachine: (newest-cni-194530) DBG | I1216 21:19:51.049420   68148 retry.go:31] will retry after 1.534073716s: waiting for machine to come up
	I1216 21:19:52.584750   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:19:52.585207   68111 main.go:141] libmachine: (newest-cni-194530) DBG | unable to find current IP address of domain newest-cni-194530 in network mk-newest-cni-194530
	I1216 21:19:52.585229   68111 main.go:141] libmachine: (newest-cni-194530) DBG | I1216 21:19:52.585158   68148 retry.go:31] will retry after 2.344648053s: waiting for machine to come up
	I1216 21:19:54.930981   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:19:54.931461   68111 main.go:141] libmachine: (newest-cni-194530) DBG | unable to find current IP address of domain newest-cni-194530 in network mk-newest-cni-194530
	I1216 21:19:54.931488   68111 main.go:141] libmachine: (newest-cni-194530) DBG | I1216 21:19:54.931420   68148 retry.go:31] will retry after 3.08029139s: waiting for machine to come up
	I1216 21:19:58.015791   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:19:58.016196   68111 main.go:141] libmachine: (newest-cni-194530) DBG | unable to find current IP address of domain newest-cni-194530 in network mk-newest-cni-194530
	I1216 21:19:58.016249   68111 main.go:141] libmachine: (newest-cni-194530) DBG | I1216 21:19:58.016142   68148 retry.go:31] will retry after 3.523081374s: waiting for machine to come up
	I1216 21:20:01.540418   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:01.540991   68111 main.go:141] libmachine: (newest-cni-194530) Found IP for machine: 192.168.72.84
	I1216 21:20:01.541017   68111 main.go:141] libmachine: (newest-cni-194530) Reserving static IP address...
	I1216 21:20:01.541030   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has current primary IP address 192.168.72.84 and MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:01.541611   68111 main.go:141] libmachine: (newest-cni-194530) Reserved static IP address: 192.168.72.84
	I1216 21:20:01.541632   68111 main.go:141] libmachine: (newest-cni-194530) Waiting for SSH to be available...
	I1216 21:20:01.541652   68111 main.go:141] libmachine: (newest-cni-194530) DBG | found host DHCP lease matching {name: "newest-cni-194530", mac: "52:54:00:34:98:24", ip: "192.168.72.84"} in network mk-newest-cni-194530: {Iface:virbr4 ExpiryTime:2024-12-16 22:19:55 +0000 UTC Type:0 Mac:52:54:00:34:98:24 Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:newest-cni-194530 Clientid:01:52:54:00:34:98:24}
	I1216 21:20:01.541677   68111 main.go:141] libmachine: (newest-cni-194530) DBG | skip adding static IP to network mk-newest-cni-194530 - found existing host DHCP lease matching {name: "newest-cni-194530", mac: "52:54:00:34:98:24", ip: "192.168.72.84"}
	I1216 21:20:01.541694   68111 main.go:141] libmachine: (newest-cni-194530) DBG | Getting to WaitForSSH function...
	I1216 21:20:01.544307   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:01.544681   68111 main.go:141] libmachine: (newest-cni-194530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:98:24", ip: ""} in network mk-newest-cni-194530: {Iface:virbr4 ExpiryTime:2024-12-16 22:19:55 +0000 UTC Type:0 Mac:52:54:00:34:98:24 Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:newest-cni-194530 Clientid:01:52:54:00:34:98:24}
	I1216 21:20:01.544714   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined IP address 192.168.72.84 and MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:01.544834   68111 main.go:141] libmachine: (newest-cni-194530) DBG | Using SSH client type: external
	I1216 21:20:01.544854   68111 main.go:141] libmachine: (newest-cni-194530) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/newest-cni-194530/id_rsa (-rw-------)
	I1216 21:20:01.544896   68111 main.go:141] libmachine: (newest-cni-194530) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/newest-cni-194530/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 21:20:01.544907   68111 main.go:141] libmachine: (newest-cni-194530) DBG | About to run SSH command:
	I1216 21:20:01.544916   68111 main.go:141] libmachine: (newest-cni-194530) DBG | exit 0
	I1216 21:20:01.680289   68111 main.go:141] libmachine: (newest-cni-194530) DBG | SSH cmd err, output: <nil>: 
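
The restart sequence above polls for the VM's DHCP lease with a growing delay ("will retry after ...: waiting for machine to come up") and then probes SSH with a trivial exit-0 command. The following is a minimal illustrative sketch of that wait-with-backoff pattern in Go, not minikube's actual implementation; the sshReady probe and the hard-coded address are assumptions for the example.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // sshReady reports whether a TCP connection to the SSH port can be opened.
    // A fuller probe would also run "exit 0" over the connection, as the log shows.
    func sshReady(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        addr := "192.168.72.84:22" // address taken from the log above
        delay := 250 * time.Millisecond
        for attempt := 1; attempt <= 15; attempt++ {
            if sshReady(addr) {
                fmt.Printf("machine is up after %d attempt(s)\n", attempt)
                return
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            delay = delay * 3 / 2 // grow the delay, roughly matching the log's pattern
        }
        fmt.Println("timed out waiting for SSH")
    }
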
	I1216 21:20:01.680702   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetConfigRaw
	I1216 21:20:01.681467   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetIP
	I1216 21:20:01.685013   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:01.685503   68111 main.go:141] libmachine: (newest-cni-194530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:98:24", ip: ""} in network mk-newest-cni-194530: {Iface:virbr4 ExpiryTime:2024-12-16 22:19:55 +0000 UTC Type:0 Mac:52:54:00:34:98:24 Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:newest-cni-194530 Clientid:01:52:54:00:34:98:24}
	I1216 21:20:01.685540   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined IP address 192.168.72.84 and MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:01.685842   68111 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/newest-cni-194530/config.json ...
	I1216 21:20:01.686063   68111 machine.go:93] provisionDockerMachine start ...
	I1216 21:20:01.686084   68111 main.go:141] libmachine: (newest-cni-194530) Calling .DriverName
	I1216 21:20:01.686348   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHHostname
	I1216 21:20:01.688519   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:01.688869   68111 main.go:141] libmachine: (newest-cni-194530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:98:24", ip: ""} in network mk-newest-cni-194530: {Iface:virbr4 ExpiryTime:2024-12-16 22:19:55 +0000 UTC Type:0 Mac:52:54:00:34:98:24 Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:newest-cni-194530 Clientid:01:52:54:00:34:98:24}
	I1216 21:20:01.688897   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined IP address 192.168.72.84 and MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:01.689019   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHPort
	I1216 21:20:01.689257   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHKeyPath
	I1216 21:20:01.689453   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHKeyPath
	I1216 21:20:01.689622   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHUsername
	I1216 21:20:01.689798   68111 main.go:141] libmachine: Using SSH client type: native
	I1216 21:20:01.690026   68111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I1216 21:20:01.690038   68111 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 21:20:01.804070   68111 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 21:20:01.804106   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetMachineName
	I1216 21:20:01.804367   68111 buildroot.go:166] provisioning hostname "newest-cni-194530"
	I1216 21:20:01.804423   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetMachineName
	I1216 21:20:01.804646   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHHostname
	I1216 21:20:01.807556   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:01.807932   68111 main.go:141] libmachine: (newest-cni-194530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:98:24", ip: ""} in network mk-newest-cni-194530: {Iface:virbr4 ExpiryTime:2024-12-16 22:19:55 +0000 UTC Type:0 Mac:52:54:00:34:98:24 Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:newest-cni-194530 Clientid:01:52:54:00:34:98:24}
	I1216 21:20:01.807955   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined IP address 192.168.72.84 and MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:01.808264   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHPort
	I1216 21:20:01.808486   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHKeyPath
	I1216 21:20:01.808691   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHKeyPath
	I1216 21:20:01.808871   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHUsername
	I1216 21:20:01.809052   68111 main.go:141] libmachine: Using SSH client type: native
	I1216 21:20:01.809280   68111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I1216 21:20:01.809304   68111 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-194530 && echo "newest-cni-194530" | sudo tee /etc/hostname
	I1216 21:20:01.942363   68111 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-194530
	
	I1216 21:20:01.942398   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHHostname
	I1216 21:20:01.945460   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:01.946017   68111 main.go:141] libmachine: (newest-cni-194530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:98:24", ip: ""} in network mk-newest-cni-194530: {Iface:virbr4 ExpiryTime:2024-12-16 22:19:55 +0000 UTC Type:0 Mac:52:54:00:34:98:24 Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:newest-cni-194530 Clientid:01:52:54:00:34:98:24}
	I1216 21:20:01.946068   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined IP address 192.168.72.84 and MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:01.946375   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHPort
	I1216 21:20:01.946640   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHKeyPath
	I1216 21:20:01.946895   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHKeyPath
	I1216 21:20:01.947108   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHUsername
	I1216 21:20:01.947354   68111 main.go:141] libmachine: Using SSH client type: native
	I1216 21:20:01.947543   68111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I1216 21:20:01.947561   68111 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-194530' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-194530/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-194530' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 21:20:02.074240   68111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
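
Hostname provisioning above is done entirely with shell snippets streamed over SSH: set the hostname, then make sure /etc/hosts maps 127.0.1.1 to it. As a hedged sketch of how such a command string could be assembled, the helper below templates the same snippet in Go; buildHostsCommand is an illustrative name, not a minikube function.

    package main

    import "fmt"

    // buildHostsCommand returns a shell snippet that rewrites (or appends) the
    // 127.0.1.1 entry in /etc/hosts for the given hostname, mirroring the
    // command visible in the log above.
    func buildHostsCommand(hostname string) string {
        return fmt.Sprintf(`
            if ! grep -xq '.*\s%[1]s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
                else
                    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
                fi
            fi`, hostname)
    }

    func main() {
        fmt.Println(buildHostsCommand("newest-cni-194530"))
    }
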
	I1216 21:20:02.074272   68111 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 21:20:02.074309   68111 buildroot.go:174] setting up certificates
	I1216 21:20:02.074317   68111 provision.go:84] configureAuth start
	I1216 21:20:02.074326   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetMachineName
	I1216 21:20:02.074675   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetIP
	I1216 21:20:02.077413   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.077807   68111 main.go:141] libmachine: (newest-cni-194530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:98:24", ip: ""} in network mk-newest-cni-194530: {Iface:virbr4 ExpiryTime:2024-12-16 22:19:55 +0000 UTC Type:0 Mac:52:54:00:34:98:24 Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:newest-cni-194530 Clientid:01:52:54:00:34:98:24}
	I1216 21:20:02.077839   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined IP address 192.168.72.84 and MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.078090   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHHostname
	I1216 21:20:02.080517   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.080870   68111 main.go:141] libmachine: (newest-cni-194530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:98:24", ip: ""} in network mk-newest-cni-194530: {Iface:virbr4 ExpiryTime:2024-12-16 22:19:55 +0000 UTC Type:0 Mac:52:54:00:34:98:24 Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:newest-cni-194530 Clientid:01:52:54:00:34:98:24}
	I1216 21:20:02.080891   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined IP address 192.168.72.84 and MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.081094   68111 provision.go:143] copyHostCerts
	I1216 21:20:02.081159   68111 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 21:20:02.081183   68111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 21:20:02.081255   68111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 21:20:02.081360   68111 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 21:20:02.081368   68111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 21:20:02.081394   68111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 21:20:02.081463   68111 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 21:20:02.081471   68111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 21:20:02.081492   68111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 21:20:02.081550   68111 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.newest-cni-194530 san=[127.0.0.1 192.168.72.84 localhost minikube newest-cni-194530]
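
The server certificate generated here carries the SANs listed in the log (loopback, the VM IP, localhost, minikube, and the profile name). A minimal, self-contained sketch of producing a certificate with those SANs using Go's standard crypto/x509 package follows; it self-signs for brevity instead of signing with the minikube CA, so it is illustrative only.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // SANs taken from the log line above; the CA signing step is omitted.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-194530"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the cluster config
            DNSNames:     []string{"localhost", "minikube", "newest-cni-194530"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.84")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
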
	I1216 21:20:02.192338   68111 provision.go:177] copyRemoteCerts
	I1216 21:20:02.192424   68111 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 21:20:02.192464   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHHostname
	I1216 21:20:02.195418   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.195804   68111 main.go:141] libmachine: (newest-cni-194530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:98:24", ip: ""} in network mk-newest-cni-194530: {Iface:virbr4 ExpiryTime:2024-12-16 22:19:55 +0000 UTC Type:0 Mac:52:54:00:34:98:24 Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:newest-cni-194530 Clientid:01:52:54:00:34:98:24}
	I1216 21:20:02.195838   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined IP address 192.168.72.84 and MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.196077   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHPort
	I1216 21:20:02.196305   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHKeyPath
	I1216 21:20:02.196472   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHUsername
	I1216 21:20:02.196635   68111 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/newest-cni-194530/id_rsa Username:docker}
	I1216 21:20:02.287413   68111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 21:20:02.314914   68111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 21:20:02.344279   68111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 21:20:02.374181   68111 provision.go:87] duration metric: took 299.849841ms to configureAuth
	I1216 21:20:02.374212   68111 buildroot.go:189] setting minikube options for container-runtime
	I1216 21:20:02.374427   68111 config.go:182] Loaded profile config "newest-cni-194530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:20:02.374554   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHHostname
	I1216 21:20:02.377600   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.378053   68111 main.go:141] libmachine: (newest-cni-194530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:98:24", ip: ""} in network mk-newest-cni-194530: {Iface:virbr4 ExpiryTime:2024-12-16 22:19:55 +0000 UTC Type:0 Mac:52:54:00:34:98:24 Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:newest-cni-194530 Clientid:01:52:54:00:34:98:24}
	I1216 21:20:02.378087   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined IP address 192.168.72.84 and MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.378247   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHPort
	I1216 21:20:02.378494   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHKeyPath
	I1216 21:20:02.378683   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHKeyPath
	I1216 21:20:02.378869   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHUsername
	I1216 21:20:02.379042   68111 main.go:141] libmachine: Using SSH client type: native
	I1216 21:20:02.379295   68111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I1216 21:20:02.379313   68111 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 21:20:02.639738   68111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 21:20:02.639765   68111 machine.go:96] duration metric: took 953.687711ms to provisionDockerMachine
	I1216 21:20:02.639780   68111 start.go:293] postStartSetup for "newest-cni-194530" (driver="kvm2")
	I1216 21:20:02.639793   68111 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 21:20:02.639842   68111 main.go:141] libmachine: (newest-cni-194530) Calling .DriverName
	I1216 21:20:02.640228   68111 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 21:20:02.640274   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHHostname
	I1216 21:20:02.642952   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.643401   68111 main.go:141] libmachine: (newest-cni-194530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:98:24", ip: ""} in network mk-newest-cni-194530: {Iface:virbr4 ExpiryTime:2024-12-16 22:19:55 +0000 UTC Type:0 Mac:52:54:00:34:98:24 Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:newest-cni-194530 Clientid:01:52:54:00:34:98:24}
	I1216 21:20:02.643434   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined IP address 192.168.72.84 and MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.643632   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHPort
	I1216 21:20:02.643855   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHKeyPath
	I1216 21:20:02.644107   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHUsername
	I1216 21:20:02.644312   68111 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/newest-cni-194530/id_rsa Username:docker}
	I1216 21:20:02.731578   68111 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 21:20:02.736565   68111 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 21:20:02.736596   68111 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 21:20:02.736670   68111 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 21:20:02.736777   68111 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 21:20:02.736897   68111 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 21:20:02.747897   68111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:20:02.777536   68111 start.go:296] duration metric: took 137.74066ms for postStartSetup
	I1216 21:20:02.777588   68111 fix.go:56] duration metric: took 19.728405636s for fixHost
	I1216 21:20:02.777614   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHHostname
	I1216 21:20:02.780645   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.781017   68111 main.go:141] libmachine: (newest-cni-194530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:98:24", ip: ""} in network mk-newest-cni-194530: {Iface:virbr4 ExpiryTime:2024-12-16 22:19:55 +0000 UTC Type:0 Mac:52:54:00:34:98:24 Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:newest-cni-194530 Clientid:01:52:54:00:34:98:24}
	I1216 21:20:02.781051   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined IP address 192.168.72.84 and MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.781285   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHPort
	I1216 21:20:02.781523   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHKeyPath
	I1216 21:20:02.781716   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHKeyPath
	I1216 21:20:02.781875   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHUsername
	I1216 21:20:02.782075   68111 main.go:141] libmachine: Using SSH client type: native
	I1216 21:20:02.782309   68111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.84 22 <nil> <nil>}
	I1216 21:20:02.782327   68111 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 21:20:02.896913   68111 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734384002.867132817
	
	I1216 21:20:02.896941   68111 fix.go:216] guest clock: 1734384002.867132817
	I1216 21:20:02.896950   68111 fix.go:229] Guest: 2024-12-16 21:20:02.867132817 +0000 UTC Remote: 2024-12-16 21:20:02.777593584 +0000 UTC m=+19.889901731 (delta=89.539233ms)
	I1216 21:20:02.896970   68111 fix.go:200] guest clock delta is within tolerance: 89.539233ms
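
fixHost closes by reading the guest clock over SSH with date +%s.%N and comparing it against the host-side timestamp for the same moment; here the 89.539233ms delta is within tolerance, so no resync is needed. The short sketch below reproduces that comparison from the two values in the log; the one-second tolerance is an assumption for the example, not minikube's actual threshold.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // guest clock as returned by `date +%s.%N` in the log above
        guestRaw := "1734384002.867132817"
        parts := strings.SplitN(guestRaw, ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)

        // host-side timestamp recorded for the same moment, also from the log
        host := time.Date(2024, 12, 16, 21, 20, 2, 777593584, time.UTC)

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed tolerance for the sketch
        fmt.Printf("clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
    }
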
	I1216 21:20:02.896976   68111 start.go:83] releasing machines lock for "newest-cni-194530", held for 19.84781263s
	I1216 21:20:02.897018   68111 main.go:141] libmachine: (newest-cni-194530) Calling .DriverName
	I1216 21:20:02.897343   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetIP
	I1216 21:20:02.900740   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.901152   68111 main.go:141] libmachine: (newest-cni-194530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:98:24", ip: ""} in network mk-newest-cni-194530: {Iface:virbr4 ExpiryTime:2024-12-16 22:19:55 +0000 UTC Type:0 Mac:52:54:00:34:98:24 Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:newest-cni-194530 Clientid:01:52:54:00:34:98:24}
	I1216 21:20:02.901182   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined IP address 192.168.72.84 and MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.901393   68111 main.go:141] libmachine: (newest-cni-194530) Calling .DriverName
	I1216 21:20:02.901947   68111 main.go:141] libmachine: (newest-cni-194530) Calling .DriverName
	I1216 21:20:02.902154   68111 main.go:141] libmachine: (newest-cni-194530) Calling .DriverName
	I1216 21:20:02.902290   68111 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 21:20:02.902335   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHHostname
	I1216 21:20:02.902353   68111 ssh_runner.go:195] Run: cat /version.json
	I1216 21:20:02.902379   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHHostname
	I1216 21:20:02.905437   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.905465   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.905848   68111 main.go:141] libmachine: (newest-cni-194530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:98:24", ip: ""} in network mk-newest-cni-194530: {Iface:virbr4 ExpiryTime:2024-12-16 22:19:55 +0000 UTC Type:0 Mac:52:54:00:34:98:24 Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:newest-cni-194530 Clientid:01:52:54:00:34:98:24}
	I1216 21:20:02.905887   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined IP address 192.168.72.84 and MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.905921   68111 main.go:141] libmachine: (newest-cni-194530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:98:24", ip: ""} in network mk-newest-cni-194530: {Iface:virbr4 ExpiryTime:2024-12-16 22:19:55 +0000 UTC Type:0 Mac:52:54:00:34:98:24 Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:newest-cni-194530 Clientid:01:52:54:00:34:98:24}
	I1216 21:20:02.905937   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined IP address 192.168.72.84 and MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:02.906130   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHPort
	I1216 21:20:02.906328   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHPort
	I1216 21:20:02.906348   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHKeyPath
	I1216 21:20:02.906536   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHUsername
	I1216 21:20:02.906575   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHKeyPath
	I1216 21:20:02.906737   68111 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/newest-cni-194530/id_rsa Username:docker}
	I1216 21:20:02.906755   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetSSHUsername
	I1216 21:20:02.906921   68111 sshutil.go:53] new ssh client: &{IP:192.168.72.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/newest-cni-194530/id_rsa Username:docker}
	I1216 21:20:03.030905   68111 ssh_runner.go:195] Run: systemctl --version
	I1216 21:20:03.037958   68111 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 21:20:03.186349   68111 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 21:20:03.193871   68111 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 21:20:03.193937   68111 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 21:20:03.212764   68111 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 21:20:03.212796   68111 start.go:495] detecting cgroup driver to use...
	I1216 21:20:03.212872   68111 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 21:20:03.232597   68111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 21:20:03.249423   68111 docker.go:217] disabling cri-docker service (if available) ...
	I1216 21:20:03.249491   68111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 21:20:03.267751   68111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 21:20:03.284531   68111 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 21:20:03.414089   68111 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 21:20:03.568374   68111 docker.go:233] disabling docker service ...
	I1216 21:20:03.568437   68111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 21:20:03.585650   68111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 21:20:03.600486   68111 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 21:20:03.734547   68111 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 21:20:03.854686   68111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 21:20:03.870753   68111 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 21:20:03.892922   68111 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 21:20:03.893005   68111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:20:03.907462   68111 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 21:20:03.907547   68111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:20:03.919164   68111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:20:03.931102   68111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:20:03.943686   68111 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 21:20:03.956393   68111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:20:03.970772   68111 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:20:03.991875   68111 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:20:04.004128   68111 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 21:20:04.016254   68111 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 21:20:04.016316   68111 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 21:20:04.031699   68111 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 21:20:04.044278   68111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:20:04.163972   68111 ssh_runner.go:195] Run: sudo systemctl restart crio
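
The runtime is switched over to CRI-O by rewriting /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarting the service. Below is a minimal sketch of driving that same sequence of sed programs through a command runner; the local run helper is an assumption for illustration, whereas minikube executes these commands on the guest over SSH via its ssh_runner.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a shell command locally and echoes its output; in the log
    // the equivalent commands are run on the guest over SSH instead.
    func run(cmd string) error {
        out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
        fmt.Printf("$ %s\n%s", cmd, out)
        return err
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        steps := []string{
            // pin the pause image and the cgroup driver, as in the log above
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' %s`, conf),
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
            fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
            fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
            "sudo systemctl daemon-reload",
            "sudo systemctl restart crio",
        }
        for _, s := range steps {
            if err := run(s); err != nil {
                fmt.Println("step failed:", err)
                return
            }
        }
    }
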
	I1216 21:20:04.270397   68111 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 21:20:04.270478   68111 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 21:20:04.275870   68111 start.go:563] Will wait 60s for crictl version
	I1216 21:20:04.275929   68111 ssh_runner.go:195] Run: which crictl
	I1216 21:20:04.281175   68111 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 21:20:04.331435   68111 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 21:20:04.331529   68111 ssh_runner.go:195] Run: crio --version
	I1216 21:20:04.368558   68111 ssh_runner.go:195] Run: crio --version
	I1216 21:20:04.402939   68111 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 21:20:04.404347   68111 main.go:141] libmachine: (newest-cni-194530) Calling .GetIP
	I1216 21:20:04.407085   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:04.407538   68111 main.go:141] libmachine: (newest-cni-194530) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:98:24", ip: ""} in network mk-newest-cni-194530: {Iface:virbr4 ExpiryTime:2024-12-16 22:19:55 +0000 UTC Type:0 Mac:52:54:00:34:98:24 Iaid: IPaddr:192.168.72.84 Prefix:24 Hostname:newest-cni-194530 Clientid:01:52:54:00:34:98:24}
	I1216 21:20:04.407591   68111 main.go:141] libmachine: (newest-cni-194530) DBG | domain newest-cni-194530 has defined IP address 192.168.72.84 and MAC address 52:54:00:34:98:24 in network mk-newest-cni-194530
	I1216 21:20:04.407786   68111 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1216 21:20:04.412609   68111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:20:04.428151   68111 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 21:20:04.429579   68111 kubeadm.go:883] updating cluster {Name:newest-cni-194530 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.32.0 ClusterName:newest-cni-194530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<n
il> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 21:20:04.429717   68111 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 21:20:04.429773   68111 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:20:04.475869   68111 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 21:20:04.475944   68111 ssh_runner.go:195] Run: which lz4
	I1216 21:20:04.480916   68111 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 21:20:04.485999   68111 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 21:20:04.486043   68111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1216 21:20:06.115176   68111 crio.go:462] duration metric: took 1.634289012s to copy over tarball
	I1216 21:20:06.115331   68111 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 21:20:08.578031   68111 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.462666704s)
	I1216 21:20:08.578066   68111 crio.go:469] duration metric: took 2.462858792s to extract the tarball
	I1216 21:20:08.578074   68111 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 21:20:08.616735   68111 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:20:08.668810   68111 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 21:20:08.668835   68111 cache_images.go:84] Images are preloaded, skipping loading
	I1216 21:20:08.668843   68111 kubeadm.go:934] updating node { 192.168.72.84 8443 v1.32.0 crio true true} ...
	I1216 21:20:08.668953   68111 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-194530 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:newest-cni-194530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 21:20:08.669036   68111 ssh_runner.go:195] Run: crio config
	I1216 21:20:08.721427   68111 cni.go:84] Creating CNI manager for ""
	I1216 21:20:08.721455   68111 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:20:08.721466   68111 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1216 21:20:08.721491   68111 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.84 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-194530 NodeName:newest-cni-194530 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 21:20:08.721627   68111 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-194530"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.84"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.84"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 21:20:08.721703   68111 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 21:20:08.733578   68111 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 21:20:08.733647   68111 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 21:20:08.745949   68111 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1216 21:20:08.766859   68111 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 21:20:08.786073   68111 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I1216 21:20:08.807511   68111 ssh_runner.go:195] Run: grep 192.168.72.84	control-plane.minikube.internal$ /etc/hosts
	I1216 21:20:08.811812   68111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:20:08.825625   68111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:20:08.977775   68111 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:20:08.997199   68111 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/newest-cni-194530 for IP: 192.168.72.84
	I1216 21:20:08.997234   68111 certs.go:194] generating shared ca certs ...
	I1216 21:20:08.997257   68111 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:20:08.997444   68111 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 21:20:08.997501   68111 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 21:20:08.997516   68111 certs.go:256] generating profile certs ...
	I1216 21:20:08.997650   68111 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/newest-cni-194530/client.key
	I1216 21:20:08.997764   68111 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/newest-cni-194530/apiserver.key.4487992d
	I1216 21:20:08.997822   68111 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/newest-cni-194530/proxy-client.key
	I1216 21:20:08.997997   68111 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 21:20:08.998046   68111 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 21:20:08.998062   68111 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 21:20:08.998098   68111 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 21:20:08.998129   68111 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 21:20:08.998167   68111 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 21:20:08.998226   68111 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:20:08.999104   68111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 21:20:09.046219   68111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 21:20:09.096478   68111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 21:20:09.143580   68111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 21:20:09.176587   68111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/newest-cni-194530/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 21:20:09.217051   68111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/newest-cni-194530/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 21:20:09.247084   68111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/newest-cni-194530/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 21:20:09.277753   68111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/newest-cni-194530/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 21:20:09.308390   68111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 21:20:09.336693   68111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 21:20:09.365384   68111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 21:20:09.393024   68111 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 21:20:09.413856   68111 ssh_runner.go:195] Run: openssl version
	I1216 21:20:09.420156   68111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 21:20:09.433401   68111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:20:09.439148   68111 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:20:09.439223   68111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:20:09.446660   68111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 21:20:09.460312   68111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 21:20:09.473564   68111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 21:20:09.479135   68111 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 21:20:09.479209   68111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 21:20:09.485900   68111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 21:20:09.499570   68111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 21:20:09.512613   68111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 21:20:09.517998   68111 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 21:20:09.518065   68111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 21:20:09.524673   68111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 21:20:09.537356   68111 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 21:20:09.543433   68111 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 21:20:09.551210   68111 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 21:20:09.558297   68111 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 21:20:09.565787   68111 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 21:20:09.572682   68111 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 21:20:09.580152   68111 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 21:20:09.587109   68111 kubeadm.go:392] StartCluster: {Name:newest-cni-194530 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32
.0 ClusterName:newest-cni-194530 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.84 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil>
ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:20:09.587234   68111 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 21:20:09.587357   68111 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:20:09.634910   68111 cri.go:89] found id: ""
	I1216 21:20:09.634985   68111 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 21:20:09.647066   68111 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 21:20:09.647099   68111 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 21:20:09.647177   68111 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 21:20:09.658149   68111 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 21:20:09.659232   68111 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-194530" does not appear in /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:20:09.659875   68111 kubeconfig.go:62] /home/jenkins/minikube-integration/20091-7083/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-194530" cluster setting kubeconfig missing "newest-cni-194530" context setting]
	I1216 21:20:09.660788   68111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:20:09.725965   68111 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 21:20:09.737618   68111 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.84
	I1216 21:20:09.737665   68111 kubeadm.go:1160] stopping kube-system containers ...
	I1216 21:20:09.737683   68111 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 21:20:09.737770   68111 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:20:09.778992   68111 cri.go:89] found id: ""
	I1216 21:20:09.779081   68111 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 21:20:09.798277   68111 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:20:09.810439   68111 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:20:09.810473   68111 kubeadm.go:157] found existing configuration files:
	
	I1216 21:20:09.810537   68111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:20:09.821206   68111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:20:09.821269   68111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:20:09.832832   68111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:20:09.846954   68111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:20:09.847038   68111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:20:09.860058   68111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:20:09.871597   68111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:20:09.871661   68111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:20:09.883551   68111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:20:09.894753   68111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:20:09.894829   68111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:20:09.908955   68111 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:20:09.920599   68111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:20:10.050438   68111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:20:10.982095   68111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:20:11.227714   68111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:20:11.318956   68111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:20:11.415436   68111 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:20:11.415549   68111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:20:11.916190   68111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:20:12.416063   68111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:20:12.489719   68111 api_server.go:72] duration metric: took 1.074281444s to wait for apiserver process to appear ...
	I1216 21:20:12.489750   68111 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:20:12.489776   68111 api_server.go:253] Checking apiserver healthz at https://192.168.72.84:8443/healthz ...
	I1216 21:20:12.490325   68111 api_server.go:269] stopped: https://192.168.72.84:8443/healthz: Get "https://192.168.72.84:8443/healthz": dial tcp 192.168.72.84:8443: connect: connection refused
	
	
	==> CRI-O <==
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.140383272Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384015140282874,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9b27ebf-bafe-49e1-b4c1-82b97d4276f0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.145195462Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c45ad57-333a-4bbb-ab79-5fc7730e6cfe name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.145271024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c45ad57-333a-4bbb-ab79-5fc7730e6cfe name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.145500064Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcd2618255da99a588a2bbff1366ef3ae7975c5b7427ce4189ef2c5fd444ba69,PodSandboxId:4a027b6736fd003341c3b022ecb72f4f64a4e5defb13a88abc98e21dd788c0bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383100726836966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b742666-dfd4-4c9b-95a9-25367ec2a718,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93fa31c7526abdf13792a5cfc284dc96b300ca36237c5d3d16c389ba6b4b224,PodSandboxId:65da06c38941b22dfca9fe46390b838efe27c4632ba3e624580b97f346eee2e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383100350440496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4wwvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb3f8053812ebdd6ac1c1b3990ea33cff2d03a50d43c7f54310432574636e2a6,PodSandboxId:5d2b63620968f65012abd2f9f158fabb1cc6a4681aab5225078b4706815f5f06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383100100017809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-c4qfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9
bf3125-1e6d-4794-a2e6-2ff7ed5132b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca52a5e130b887ba85db3ae0ebc536eabd241b8b47bd574e2312e53de9ed7e6,PodSandboxId:6bc53584513262f57a230b5ca6bd863547fef49c2bb8a3889b4b264e6e89075d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:
1734383099371151110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5hq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0d357a-dda2-4508-a954-5c67eaf5b8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f644bafa71082b2f43c37e4984bcf95201973c749ec322d44cf504a64879cf1d,PodSandboxId:2a9a6364a517696e5951ef6bd5c50ed7cc3fb1a61088318b4a9964e81c900bf8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383088510061093,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d0531328a1e22e77c38d5296534b60,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80b96bf35cfc3a12f52bfdb0f2e4eae378235fd658763bc0054a669a0e7919a,PodSandboxId:e40606d090d61ee9e28ab4fbfec4316a013f9eb9e3c827afd055c3cfc5929844,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:173438308846219
1813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5f24463af3d3cd6c412e107e62d9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cb850ac82ce6321ec8c820d2b187338227813eb490151ac8c15b3c8185fc60,PodSandboxId:45454fd600089289d1da856ebc8e119c0fb670019408f2221c8547d8eb4dc690,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383088387050693,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385603c4d7165566e9b078308e8ed0ab97e4f8623edef92149ca1315ea5bcecd,PodSandboxId:d0ad047ea69298677e8b3012f5974254f73924ffe29df1b7d429392a5a03a9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383088345861879,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d61d90d3fc49432c3d4314e8cdc6846,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54a482a8f0d22fa1dbe26bce347ead5e18fdbf256a99bfe5ede3c5c070c44e8c,PodSandboxId:63e38a0cdd4ab48bf512e430486626bc6ba5d8812126d7f48544b696d00fe7c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382800623517087,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c45ad57-333a-4bbb-ab79-5fc7730e6cfe name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.191707845Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0da46d0f-c268-4190-bf56-aed3b8de2969 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.191782656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0da46d0f-c268-4190-bf56-aed3b8de2969 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.193184960Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6063aff-30e1-4326-82c9-e45ad2bbed06 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.193860720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384015193825913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6063aff-30e1-4326-82c9-e45ad2bbed06 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.194962465Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b762b200-2a36-4be7-922e-a31903671909 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.195032034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b762b200-2a36-4be7-922e-a31903671909 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.195252791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcd2618255da99a588a2bbff1366ef3ae7975c5b7427ce4189ef2c5fd444ba69,PodSandboxId:4a027b6736fd003341c3b022ecb72f4f64a4e5defb13a88abc98e21dd788c0bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383100726836966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b742666-dfd4-4c9b-95a9-25367ec2a718,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93fa31c7526abdf13792a5cfc284dc96b300ca36237c5d3d16c389ba6b4b224,PodSandboxId:65da06c38941b22dfca9fe46390b838efe27c4632ba3e624580b97f346eee2e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383100350440496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4wwvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb3f8053812ebdd6ac1c1b3990ea33cff2d03a50d43c7f54310432574636e2a6,PodSandboxId:5d2b63620968f65012abd2f9f158fabb1cc6a4681aab5225078b4706815f5f06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383100100017809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-c4qfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9
bf3125-1e6d-4794-a2e6-2ff7ed5132b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca52a5e130b887ba85db3ae0ebc536eabd241b8b47bd574e2312e53de9ed7e6,PodSandboxId:6bc53584513262f57a230b5ca6bd863547fef49c2bb8a3889b4b264e6e89075d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:
1734383099371151110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5hq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0d357a-dda2-4508-a954-5c67eaf5b8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f644bafa71082b2f43c37e4984bcf95201973c749ec322d44cf504a64879cf1d,PodSandboxId:2a9a6364a517696e5951ef6bd5c50ed7cc3fb1a61088318b4a9964e81c900bf8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383088510061093,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d0531328a1e22e77c38d5296534b60,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80b96bf35cfc3a12f52bfdb0f2e4eae378235fd658763bc0054a669a0e7919a,PodSandboxId:e40606d090d61ee9e28ab4fbfec4316a013f9eb9e3c827afd055c3cfc5929844,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:173438308846219
1813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5f24463af3d3cd6c412e107e62d9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cb850ac82ce6321ec8c820d2b187338227813eb490151ac8c15b3c8185fc60,PodSandboxId:45454fd600089289d1da856ebc8e119c0fb670019408f2221c8547d8eb4dc690,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383088387050693,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385603c4d7165566e9b078308e8ed0ab97e4f8623edef92149ca1315ea5bcecd,PodSandboxId:d0ad047ea69298677e8b3012f5974254f73924ffe29df1b7d429392a5a03a9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383088345861879,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d61d90d3fc49432c3d4314e8cdc6846,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54a482a8f0d22fa1dbe26bce347ead5e18fdbf256a99bfe5ede3c5c070c44e8c,PodSandboxId:63e38a0cdd4ab48bf512e430486626bc6ba5d8812126d7f48544b696d00fe7c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382800623517087,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b762b200-2a36-4be7-922e-a31903671909 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.235961243Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c45aa86a-1af5-4a94-be91-817c72288b9e name=/runtime.v1.RuntimeService/Version
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.236055683Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c45aa86a-1af5-4a94-be91-817c72288b9e name=/runtime.v1.RuntimeService/Version
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.237624744Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba170432-2d1d-4998-b14c-ef7005805308 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.237971052Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384015237949142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba170432-2d1d-4998-b14c-ef7005805308 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.238523617Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63643c5e-47b7-4df0-8b9f-1e3c49e7913b name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.238575987Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63643c5e-47b7-4df0-8b9f-1e3c49e7913b name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.238779658Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcd2618255da99a588a2bbff1366ef3ae7975c5b7427ce4189ef2c5fd444ba69,PodSandboxId:4a027b6736fd003341c3b022ecb72f4f64a4e5defb13a88abc98e21dd788c0bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383100726836966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b742666-dfd4-4c9b-95a9-25367ec2a718,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93fa31c7526abdf13792a5cfc284dc96b300ca36237c5d3d16c389ba6b4b224,PodSandboxId:65da06c38941b22dfca9fe46390b838efe27c4632ba3e624580b97f346eee2e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383100350440496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4wwvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb3f8053812ebdd6ac1c1b3990ea33cff2d03a50d43c7f54310432574636e2a6,PodSandboxId:5d2b63620968f65012abd2f9f158fabb1cc6a4681aab5225078b4706815f5f06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383100100017809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-c4qfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9
bf3125-1e6d-4794-a2e6-2ff7ed5132b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca52a5e130b887ba85db3ae0ebc536eabd241b8b47bd574e2312e53de9ed7e6,PodSandboxId:6bc53584513262f57a230b5ca6bd863547fef49c2bb8a3889b4b264e6e89075d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:
1734383099371151110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5hq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0d357a-dda2-4508-a954-5c67eaf5b8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f644bafa71082b2f43c37e4984bcf95201973c749ec322d44cf504a64879cf1d,PodSandboxId:2a9a6364a517696e5951ef6bd5c50ed7cc3fb1a61088318b4a9964e81c900bf8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383088510061093,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d0531328a1e22e77c38d5296534b60,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80b96bf35cfc3a12f52bfdb0f2e4eae378235fd658763bc0054a669a0e7919a,PodSandboxId:e40606d090d61ee9e28ab4fbfec4316a013f9eb9e3c827afd055c3cfc5929844,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:173438308846219
1813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5f24463af3d3cd6c412e107e62d9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cb850ac82ce6321ec8c820d2b187338227813eb490151ac8c15b3c8185fc60,PodSandboxId:45454fd600089289d1da856ebc8e119c0fb670019408f2221c8547d8eb4dc690,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383088387050693,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385603c4d7165566e9b078308e8ed0ab97e4f8623edef92149ca1315ea5bcecd,PodSandboxId:d0ad047ea69298677e8b3012f5974254f73924ffe29df1b7d429392a5a03a9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383088345861879,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d61d90d3fc49432c3d4314e8cdc6846,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54a482a8f0d22fa1dbe26bce347ead5e18fdbf256a99bfe5ede3c5c070c44e8c,PodSandboxId:63e38a0cdd4ab48bf512e430486626bc6ba5d8812126d7f48544b696d00fe7c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382800623517087,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=63643c5e-47b7-4df0-8b9f-1e3c49e7913b name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.280222815Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4230329b-af28-4f03-9fab-8cc58609063a name=/runtime.v1.RuntimeService/Version
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.280364674Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4230329b-af28-4f03-9fab-8cc58609063a name=/runtime.v1.RuntimeService/Version
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.282583716Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b916d3b9-a003-4621-ab09-46e4f583f121 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.283188063Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384015283144776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b916d3b9-a003-4621-ab09-46e4f583f121 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.284110184Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78a2d6c8-e40a-456c-ba72-151a8d93ba19 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.284191530Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78a2d6c8-e40a-456c-ba72-151a8d93ba19 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:15 no-preload-232338 crio[723]: time="2024-12-16 21:20:15.284608067Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dcd2618255da99a588a2bbff1366ef3ae7975c5b7427ce4189ef2c5fd444ba69,PodSandboxId:4a027b6736fd003341c3b022ecb72f4f64a4e5defb13a88abc98e21dd788c0bc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383100726836966,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b742666-dfd4-4c9b-95a9-25367ec2a718,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93fa31c7526abdf13792a5cfc284dc96b300ca36237c5d3d16c389ba6b4b224,PodSandboxId:65da06c38941b22dfca9fe46390b838efe27c4632ba3e624580b97f346eee2e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383100350440496,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4wwvd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb3f8053812ebdd6ac1c1b3990ea33cff2d03a50d43c7f54310432574636e2a6,PodSandboxId:5d2b63620968f65012abd2f9f158fabb1cc6a4681aab5225078b4706815f5f06,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383100100017809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-c4qfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9
bf3125-1e6d-4794-a2e6-2ff7ed5132b1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca52a5e130b887ba85db3ae0ebc536eabd241b8b47bd574e2312e53de9ed7e6,PodSandboxId:6bc53584513262f57a230b5ca6bd863547fef49c2bb8a3889b4b264e6e89075d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:
1734383099371151110,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5hq8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0d357a-dda2-4508-a954-5c67eaf5b8ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f644bafa71082b2f43c37e4984bcf95201973c749ec322d44cf504a64879cf1d,PodSandboxId:2a9a6364a517696e5951ef6bd5c50ed7cc3fb1a61088318b4a9964e81c900bf8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383088510061093,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31d0531328a1e22e77c38d5296534b60,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80b96bf35cfc3a12f52bfdb0f2e4eae378235fd658763bc0054a669a0e7919a,PodSandboxId:e40606d090d61ee9e28ab4fbfec4316a013f9eb9e3c827afd055c3cfc5929844,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:173438308846219
1813,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5f24463af3d3cd6c412e107e62d9ac,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18cb850ac82ce6321ec8c820d2b187338227813eb490151ac8c15b3c8185fc60,PodSandboxId:45454fd600089289d1da856ebc8e119c0fb670019408f2221c8547d8eb4dc690,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383088387050693,Labels:m
ap[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:385603c4d7165566e9b078308e8ed0ab97e4f8623edef92149ca1315ea5bcecd,PodSandboxId:d0ad047ea69298677e8b3012f5974254f73924ffe29df1b7d429392a5a03a9dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383088345861879,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d61d90d3fc49432c3d4314e8cdc6846,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54a482a8f0d22fa1dbe26bce347ead5e18fdbf256a99bfe5ede3c5c070c44e8c,PodSandboxId:63e38a0cdd4ab48bf512e430486626bc6ba5d8812126d7f48544b696d00fe7c6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382800623517087,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-232338,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a928546caa71eb5802e4715858850ef,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78a2d6c8-e40a-456c-ba72-151a8d93ba19 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dcd2618255da9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   4a027b6736fd0       storage-provisioner
	f93fa31c7526a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   65da06c38941b       coredns-668d6bf9bc-4wwvd
	eb3f8053812eb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   5d2b63620968f       coredns-668d6bf9bc-c4qfj
	9ca52a5e130b8       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   15 minutes ago      Running             kube-proxy                0                   6bc5358451326       kube-proxy-m5hq8
	f644bafa71082       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   15 minutes ago      Running             kube-controller-manager   3                   2a9a6364a5176       kube-controller-manager-no-preload-232338
	d80b96bf35cfc       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   15 minutes ago      Running             kube-scheduler            2                   e40606d090d61       kube-scheduler-no-preload-232338
	18cb850ac82ce       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   15 minutes ago      Running             kube-apiserver            3                   45454fd600089       kube-apiserver-no-preload-232338
	385603c4d7165       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   15 minutes ago      Running             etcd                      2                   d0ad047ea6929       etcd-no-preload-232338
	54a482a8f0d22       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   20 minutes ago      Exited              kube-apiserver            2                   63e38a0cdd4ab       kube-apiserver-no-preload-232338
	
	
	==> coredns [eb3f8053812ebdd6ac1c1b3990ea33cff2d03a50d43c7f54310432574636e2a6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f93fa31c7526abdf13792a5cfc284dc96b300ca36237c5d3d16c389ba6b4b224] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-232338
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-232338
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477
	                    minikube.k8s.io/name=no-preload-232338
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T21_04_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 21:04:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-232338
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 21:20:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 21:18:09 +0000   Mon, 16 Dec 2024 21:04:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 21:18:09 +0000   Mon, 16 Dec 2024 21:04:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 21:18:09 +0000   Mon, 16 Dec 2024 21:04:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 21:18:09 +0000   Mon, 16 Dec 2024 21:04:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.240
	  Hostname:    no-preload-232338
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d6d4b6a19254597baed2d6b2e63d93a
	  System UUID:                4d6d4b6a-1925-4597-baed-2d6b2e63d93a
	  Boot ID:                    c70c7922-4b19-43b3-83da-8cb42766b38e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-4wwvd                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-668d6bf9bc-c4qfj                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-no-preload-232338                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-232338             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-232338    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-m5hq8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-no-preload-232338             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-f79f97bbb-l7dcr               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node no-preload-232338 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node no-preload-232338 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node no-preload-232338 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node no-preload-232338 event: Registered Node no-preload-232338 in Controller
	
	
	==> dmesg <==
	[  +4.983868] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.883837] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.605841] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.019208] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.070875] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057104] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.175964] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.152133] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.293255] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[ +16.626473] systemd-fstab-generator[1314]: Ignoring "noauto" option for root device
	[  +0.068009] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.261031] systemd-fstab-generator[1437]: Ignoring "noauto" option for root device
	[ +23.478187] kauditd_printk_skb: 90 callbacks suppressed
	[Dec16 21:00] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.175760] kauditd_printk_skb: 40 callbacks suppressed
	[ +36.814798] kauditd_printk_skb: 31 callbacks suppressed
	[Dec16 21:04] systemd-fstab-generator[3324]: Ignoring "noauto" option for root device
	[  +0.063880] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.496549] systemd-fstab-generator[3674]: Ignoring "noauto" option for root device
	[  +0.081770] kauditd_printk_skb: 55 callbacks suppressed
	[  +4.907189] systemd-fstab-generator[3787]: Ignoring "noauto" option for root device
	[  +0.095272] kauditd_printk_skb: 12 callbacks suppressed
	[Dec16 21:05] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [385603c4d7165566e9b078308e8ed0ab97e4f8623edef92149ca1315ea5bcecd] <==
	{"level":"info","ts":"2024-12-16T21:04:49.467371Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T21:04:49.467899Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T21:04:49.468385Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T21:04:49.469477Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f821e93ad39fa3f0","local-member-id":"ee01ff8259a5f1e0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T21:04:49.469710Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T21:04:49.469761Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T21:04:49.470130Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T21:04:49.475854Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-16T21:04:49.490484Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-16T21:04:49.490607Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-16T21:04:49.495049Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T21:04:49.499991Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.240:2379"}
	{"level":"info","ts":"2024-12-16T21:14:49.689421Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":710}
	{"level":"info","ts":"2024-12-16T21:14:49.699596Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":710,"took":"9.601964ms","hash":1180201028,"current-db-size-bytes":2301952,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2301952,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-12-16T21:14:49.699730Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":1180201028,"revision":710,"compact-revision":-1}
	{"level":"warn","ts":"2024-12-16T21:19:18.378576Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.721587ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17429093085017837575 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-7v7vy4n5p3mue2q4ewp7dktovq\" mod_revision:1166 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-7v7vy4n5p3mue2q4ewp7dktovq\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-7v7vy4n5p3mue2q4ewp7dktovq\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-16T21:19:18.378831Z","caller":"traceutil/trace.go:171","msg":"trace[1795055247] transaction","detail":"{read_only:false; response_revision:1175; number_of_response:1; }","duration":"338.560573ms","start":"2024-12-16T21:19:18.040241Z","end":"2024-12-16T21:19:18.378801Z","steps":["trace[1795055247] 'process raft request'  (duration: 82.73088ms)","trace[1795055247] 'compare'  (duration: 254.307506ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-16T21:19:18.378917Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T21:19:18.040219Z","time spent":"338.668092ms","remote":"127.0.0.1:45832","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":681,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-7v7vy4n5p3mue2q4ewp7dktovq\" mod_revision:1166 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-7v7vy4n5p3mue2q4ewp7dktovq\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-7v7vy4n5p3mue2q4ewp7dktovq\" > >"}
	{"level":"warn","ts":"2024-12-16T21:19:18.705054Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.501639ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17429093085017837576 > lease_revoke:<id:71e093d149b48fac>","response":"size:29"}
	{"level":"info","ts":"2024-12-16T21:19:49.699390Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":955}
	{"level":"info","ts":"2024-12-16T21:19:49.703183Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":955,"took":"3.440428ms","hash":178116340,"current-db-size-bytes":2301952,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1609728,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-16T21:19:49.703262Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":178116340,"revision":955,"compact-revision":710}
	{"level":"info","ts":"2024-12-16T21:20:10.520478Z","caller":"traceutil/trace.go:171","msg":"trace[1360267869] transaction","detail":"{read_only:false; response_revision:1217; number_of_response:1; }","duration":"201.183633ms","start":"2024-12-16T21:20:10.319260Z","end":"2024-12-16T21:20:10.520444Z","steps":["trace[1360267869] 'process raft request'  (duration: 200.953349ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T21:20:10.864371Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.648092ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T21:20:10.864469Z","caller":"traceutil/trace.go:171","msg":"trace[895939319] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1217; }","duration":"241.882749ms","start":"2024-12-16T21:20:10.622570Z","end":"2024-12-16T21:20:10.864453Z","steps":["trace[895939319] 'range keys from in-memory index tree'  (duration: 241.626758ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:20:15 up 21 min,  0 users,  load average: 0.06, 0.12, 0.16
	Linux no-preload-232338 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [18cb850ac82ce6321ec8c820d2b187338227813eb490151ac8c15b3c8185fc60] <==
	I1216 21:15:52.241861       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 21:15:52.241896       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1216 21:17:52.242122       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:17:52.242222       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1216 21:17:52.242357       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:17:52.242410       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 21:17:52.243404       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 21:17:52.243473       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1216 21:19:51.244784       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:19:51.244926       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1216 21:19:52.247431       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:19:52.247560       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1216 21:19:52.247902       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:19:52.248164       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1216 21:19:52.248785       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 21:19:52.249938       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [54a482a8f0d22fa1dbe26bce347ead5e18fdbf256a99bfe5ede3c5c070c44e8c] <==
	W1216 21:04:40.813411       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.821897       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.827672       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.831069       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.852788       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.855210       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.895165       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.898844       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.921949       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:40.963188       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:41.016200       1 logging.go:55] [core] [Channel #199 SubChannel #200]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:41.019928       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:41.031971       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:41.157836       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:41.391532       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:41.526530       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:41.841869       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:45.420840       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:45.437601       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:45.449663       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:45.569142       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:45.634038       1 logging.go:55] [core] [Channel #199 SubChannel #200]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:45.664593       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:45.693700       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:04:45.701159       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [f644bafa71082b2f43c37e4984bcf95201973c749ec322d44cf504a64879cf1d] <==
	E1216 21:14:57.884078       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:14:57.985657       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:15:27.892057       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:15:27.996267       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:15:57.899189       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:15:58.004509       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 21:16:08.064844       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="191.571µs"
	I1216 21:16:21.062184       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="108.355µs"
	E1216 21:16:27.905826       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:16:28.015614       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:16:57.914659       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:16:58.024276       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:17:27.921553       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:17:28.032955       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:17:57.927827       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:17:58.044201       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 21:18:09.529300       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="no-preload-232338"
	E1216 21:18:27.933646       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:18:28.053009       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:18:57.945223       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:18:58.062872       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:19:27.953578       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:19:28.073493       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:19:57.960572       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:19:58.082108       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [9ca52a5e130b887ba85db3ae0ebc536eabd241b8b47bd574e2312e53de9ed7e6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1216 21:05:00.015519       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1216 21:05:00.047397       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.50.240"]
	E1216 21:05:00.047515       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 21:05:00.566393       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1216 21:05:00.566456       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 21:05:00.566483       1 server_linux.go:170] "Using iptables Proxier"
	I1216 21:05:00.602172       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 21:05:00.602632       1 server.go:497] "Version info" version="v1.32.0"
	I1216 21:05:00.602665       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 21:05:00.687521       1 config.go:199] "Starting service config controller"
	I1216 21:05:00.687668       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 21:05:00.687788       1 config.go:105] "Starting endpoint slice config controller"
	I1216 21:05:00.687896       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 21:05:00.695942       1 config.go:329] "Starting node config controller"
	I1216 21:05:00.696267       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 21:05:00.787875       1 shared_informer.go:320] Caches are synced for service config
	I1216 21:05:00.787960       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 21:05:00.812727       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d80b96bf35cfc3a12f52bfdb0f2e4eae378235fd658763bc0054a669a0e7919a] <==
	W1216 21:04:51.266841       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:51.266851       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:51.267455       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1216 21:04:51.267493       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.077506       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1216 21:04:52.077565       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.123175       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 21:04:52.123296       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.244516       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1216 21:04:52.244636       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.371942       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 21:04:52.372001       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.402837       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:52.402897       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.404564       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1216 21:04:52.404622       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.525409       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:52.525485       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.580017       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 21:04:52.580079       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.605416       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 21:04:52.605452       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:04:52.625399       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1216 21:04:52.625623       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1216 21:04:55.252595       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 21:19:07 no-preload-232338 kubelet[3681]: E1216 21:19:07.043983    3681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-l7dcr" podUID="fabafb40-1cb8-427b-88a6-37eeb6fd5b77"
	Dec 16 21:19:14 no-preload-232338 kubelet[3681]: E1216 21:19:14.369263    3681 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383954369016267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:19:14 no-preload-232338 kubelet[3681]: E1216 21:19:14.369286    3681 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383954369016267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:19:19 no-preload-232338 kubelet[3681]: E1216 21:19:19.044231    3681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-l7dcr" podUID="fabafb40-1cb8-427b-88a6-37eeb6fd5b77"
	Dec 16 21:19:24 no-preload-232338 kubelet[3681]: E1216 21:19:24.370723    3681 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383964370221254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:19:24 no-preload-232338 kubelet[3681]: E1216 21:19:24.371376    3681 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383964370221254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:19:34 no-preload-232338 kubelet[3681]: E1216 21:19:34.044494    3681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-l7dcr" podUID="fabafb40-1cb8-427b-88a6-37eeb6fd5b77"
	Dec 16 21:19:34 no-preload-232338 kubelet[3681]: E1216 21:19:34.374041    3681 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383974373160471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:19:34 no-preload-232338 kubelet[3681]: E1216 21:19:34.374171    3681 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383974373160471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:19:44 no-preload-232338 kubelet[3681]: E1216 21:19:44.375897    3681 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383984375241070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:19:44 no-preload-232338 kubelet[3681]: E1216 21:19:44.376289    3681 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383984375241070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:19:47 no-preload-232338 kubelet[3681]: E1216 21:19:47.044429    3681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-l7dcr" podUID="fabafb40-1cb8-427b-88a6-37eeb6fd5b77"
	Dec 16 21:19:54 no-preload-232338 kubelet[3681]: E1216 21:19:54.079839    3681 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 16 21:19:54 no-preload-232338 kubelet[3681]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 16 21:19:54 no-preload-232338 kubelet[3681]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 16 21:19:54 no-preload-232338 kubelet[3681]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 16 21:19:54 no-preload-232338 kubelet[3681]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 16 21:19:54 no-preload-232338 kubelet[3681]: E1216 21:19:54.383101    3681 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383994382014649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:19:54 no-preload-232338 kubelet[3681]: E1216 21:19:54.384423    3681 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383994382014649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:20:01 no-preload-232338 kubelet[3681]: E1216 21:20:01.043872    3681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-l7dcr" podUID="fabafb40-1cb8-427b-88a6-37eeb6fd5b77"
	Dec 16 21:20:04 no-preload-232338 kubelet[3681]: E1216 21:20:04.386297    3681 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384004385892863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:20:04 no-preload-232338 kubelet[3681]: E1216 21:20:04.386672    3681 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384004385892863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:20:12 no-preload-232338 kubelet[3681]: E1216 21:20:12.044554    3681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-l7dcr" podUID="fabafb40-1cb8-427b-88a6-37eeb6fd5b77"
	Dec 16 21:20:14 no-preload-232338 kubelet[3681]: E1216 21:20:14.388811    3681 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384014388019092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:20:14 no-preload-232338 kubelet[3681]: E1216 21:20:14.388860    3681 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384014388019092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100999,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [dcd2618255da99a588a2bbff1366ef3ae7975c5b7427ce4189ef2c5fd444ba69] <==
	I1216 21:05:00.871168       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 21:05:00.899586       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 21:05:00.899632       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 21:05:00.926031       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 21:05:00.926216       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-232338_676ddeb4-4c29-4c65-b900-27842ee95fa7!
	I1216 21:05:00.928108       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4f00ba71-898c-4f68-a46e-15b5734a6f46", APIVersion:"v1", ResourceVersion:"391", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-232338_676ddeb4-4c29-4c65-b900-27842ee95fa7 became leader
	I1216 21:05:01.026539       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-232338_676ddeb4-4c29-4c65-b900-27842ee95fa7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-232338 -n no-preload-232338
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-232338 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-l7dcr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-232338 describe pod metrics-server-f79f97bbb-l7dcr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-232338 describe pod metrics-server-f79f97bbb-l7dcr: exit status 1 (72.079506ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-l7dcr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-232338 describe pod metrics-server-f79f97bbb-l7dcr: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (364.80s)
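
The kubelet log above shows metrics-server stuck in ImagePullBackOff against the deliberately unreachable fake.domain registry, and the post-mortem's only non-running pod is that same metrics-server pod. A minimal manual check of that state (a sketch only; it assumes the no-preload-232338 profile and its kube-system pods still exist, which the NotFound error above indicates they no longer do, and it assumes the addon keeps the upstream k8s-app=metrics-server label) would be:

    kubectl --context no-preload-232338 -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{.items[*].status.containerStatuses[*].state.waiting.reason}'

On a cluster in this state the command would print ImagePullBackOff, matching the kubelet's "Back-off pulling image" messages.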

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (356.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.151:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-606219 -n embed-certs-606219
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-16 21:20:30.165813406 +0000 UTC m=+6348.639461139
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-606219 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context embed-certs-606219 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.669µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-606219 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
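The check that failed here compares the dashboard-metrics-scraper image against registry.k8s.io/echoserver:1.4, the substitute image requested by the earlier "addons enable dashboard -p embed-certs-606219 --images=MetricsScraper=..." command in the Audit table. A hand-run equivalent (a sketch only; it assumes the embed-certs-606219 apiserver is reachable again, which the context-deadline errors above suggest it was not at the time) would be:

    kubectl --context embed-certs-606219 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'

The test passes only if the printed image string contains registry.k8s.io/echoserver:1.4; here the describe returned nothing, so the comparison ran against an empty deployment info string.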
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-606219 -n embed-certs-606219
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-606219 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-606219 logs -n 25: (1.423128491s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC |                     |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-847766        | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-606219                 | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC | 16 Dec 24 21:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-232338                  | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC | 16 Dec 24 21:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-327790       | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 20:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 21:04 UTC |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-847766             | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 20:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 21:18 UTC | 16 Dec 24 21:18 UTC |
	| start   | -p newest-cni-194530 --memory=2200 --alsologtostderr   | newest-cni-194530            | jenkins | v1.34.0 | 16 Dec 24 21:18 UTC | 16 Dec 24 21:19 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-194530             | newest-cni-194530            | jenkins | v1.34.0 | 16 Dec 24 21:19 UTC | 16 Dec 24 21:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-194530                                   | newest-cni-194530            | jenkins | v1.34.0 | 16 Dec 24 21:19 UTC | 16 Dec 24 21:19 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-194530                  | newest-cni-194530            | jenkins | v1.34.0 | 16 Dec 24 21:19 UTC | 16 Dec 24 21:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-194530 --memory=2200 --alsologtostderr   | newest-cni-194530            | jenkins | v1.34.0 | 16 Dec 24 21:19 UTC | 16 Dec 24 21:20 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 21:20 UTC | 16 Dec 24 21:20 UTC |
	| start   | -p auto-647112 --memory=3072                           | auto-647112                  | jenkins | v1.34.0 | 16 Dec 24 21:20 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| image   | newest-cni-194530 image list                           | newest-cni-194530            | jenkins | v1.34.0 | 16 Dec 24 21:20 UTC | 16 Dec 24 21:20 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-194530                                   | newest-cni-194530            | jenkins | v1.34.0 | 16 Dec 24 21:20 UTC | 16 Dec 24 21:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-194530                                   | newest-cni-194530            | jenkins | v1.34.0 | 16 Dec 24 21:20 UTC | 16 Dec 24 21:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-194530                                   | newest-cni-194530            | jenkins | v1.34.0 | 16 Dec 24 21:20 UTC | 16 Dec 24 21:20 UTC |
	| delete  | -p newest-cni-194530                                   | newest-cni-194530            | jenkins | v1.34.0 | 16 Dec 24 21:20 UTC | 16 Dec 24 21:20 UTC |
	| start   | -p kindnet-647112                                      | kindnet-647112               | jenkins | v1.34.0 | 16 Dec 24 21:20 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 21:20:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 21:20:24.948646   69158 out.go:345] Setting OutFile to fd 1 ...
	I1216 21:20:24.948794   69158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 21:20:24.948805   69158 out.go:358] Setting ErrFile to fd 2...
	I1216 21:20:24.948812   69158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 21:20:24.948997   69158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 21:20:24.949611   69158 out.go:352] Setting JSON to false
	I1216 21:20:24.950729   69158 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7370,"bootTime":1734376655,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 21:20:24.950836   69158 start.go:139] virtualization: kvm guest
	I1216 21:20:24.953229   69158 out.go:177] * [kindnet-647112] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 21:20:24.954869   69158 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 21:20:24.954916   69158 notify.go:220] Checking for updates...
	I1216 21:20:24.957423   69158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 21:20:24.958823   69158 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:20:24.960207   69158 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 21:20:24.961507   69158 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 21:20:24.963027   69158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 21:20:24.965055   69158 config.go:182] Loaded profile config "auto-647112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:20:24.965202   69158 config.go:182] Loaded profile config "default-k8s-diff-port-327790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:20:24.965323   69158 config.go:182] Loaded profile config "embed-certs-606219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:20:24.965447   69158 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 21:20:25.004320   69158 out.go:177] * Using the kvm2 driver based on user configuration
	I1216 21:20:25.005866   69158 start.go:297] selected driver: kvm2
	I1216 21:20:25.005884   69158 start.go:901] validating driver "kvm2" against <nil>
	I1216 21:20:25.005905   69158 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 21:20:25.006681   69158 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 21:20:25.006753   69158 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 21:20:25.023789   69158 install.go:137] /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1216 21:20:25.023840   69158 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 21:20:25.024094   69158 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:20:25.024124   69158 cni.go:84] Creating CNI manager for "kindnet"
	I1216 21:20:25.024128   69158 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 21:20:25.024179   69158 start.go:340] cluster config:
	{Name:kindnet-647112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:kindnet-647112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:20:25.024272   69158 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 21:20:25.026443   69158 out.go:177] * Starting "kindnet-647112" primary control-plane node in "kindnet-647112" cluster
	I1216 21:20:23.756216   68629 main.go:141] libmachine: (auto-647112) DBG | domain auto-647112 has defined MAC address 52:54:00:6f:9d:44 in network mk-auto-647112
	I1216 21:20:23.756802   68629 main.go:141] libmachine: (auto-647112) DBG | unable to find current IP address of domain auto-647112 in network mk-auto-647112
	I1216 21:20:23.756832   68629 main.go:141] libmachine: (auto-647112) DBG | I1216 21:20:23.756762   68652 retry.go:31] will retry after 1.230420408s: waiting for machine to come up
	I1216 21:20:24.988452   68629 main.go:141] libmachine: (auto-647112) DBG | domain auto-647112 has defined MAC address 52:54:00:6f:9d:44 in network mk-auto-647112
	I1216 21:20:24.988865   68629 main.go:141] libmachine: (auto-647112) DBG | unable to find current IP address of domain auto-647112 in network mk-auto-647112
	I1216 21:20:24.988912   68629 main.go:141] libmachine: (auto-647112) DBG | I1216 21:20:24.988856   68652 retry.go:31] will retry after 1.640772872s: waiting for machine to come up
	I1216 21:20:26.631915   68629 main.go:141] libmachine: (auto-647112) DBG | domain auto-647112 has defined MAC address 52:54:00:6f:9d:44 in network mk-auto-647112
	I1216 21:20:26.632450   68629 main.go:141] libmachine: (auto-647112) DBG | unable to find current IP address of domain auto-647112 in network mk-auto-647112
	I1216 21:20:26.632482   68629 main.go:141] libmachine: (auto-647112) DBG | I1216 21:20:26.632402   68652 retry.go:31] will retry after 2.135371884s: waiting for machine to come up
	I1216 21:20:25.028180   69158 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 21:20:25.028239   69158 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1216 21:20:25.028253   69158 cache.go:56] Caching tarball of preloaded images
	I1216 21:20:25.028337   69158 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 21:20:25.028361   69158 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1216 21:20:25.028487   69158 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kindnet-647112/config.json ...
	I1216 21:20:25.028513   69158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/kindnet-647112/config.json: {Name:mke33a9c43d665775a1c8301f4e268dfac8eb96c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:20:25.028683   69158 start.go:360] acquireMachinesLock for kindnet-647112: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	
	
	==> CRI-O <==
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.822706212Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384030822680408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6434d2ec-4266-4279-a718-cacdecf04d45 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.823412768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d272c423-fd52-4ca6-b9eb-734872ebf40f name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.823467434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d272c423-fd52-4ca6-b9eb-734872ebf40f name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.823702479Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f207b770a60c78f07c7d2caae42124dc7cb80a0f2c2c4d421803607465ed058c,PodSandboxId:f1532bf4e0fd1a6d5a9b45282f434801954d844e6d03b0c7eb98493e6c3ab1c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383126995102678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6437bd61-690b-498d-b35c-e2ef4eb5be97,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e011e81807ce18c97ed73f8ae1e9c158bbad51f79df6c1bb7808de64827f86c,PodSandboxId:2b784c516581038f6ad9f2a5df073e00d80f8e09f756249656389e0f641db76f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383126870609242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xhdlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1b5b585-f005-4885-9809-60f60e03bf04,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da4ed6ea7998ea937d7237a73d1d06d850a02996479db7043cc3186011d15164,PodSandboxId:6b2f68a619579c14faf033125a44d8023e136897c6a996a00b0eaf8ca14dc783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383126652789811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5c74p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
f8e73b6-150f-47cc-9df9-dcf983e5bd6e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af285d29097840b5484fc635cea5ab9e9ffa5c72a4d6ad4cc8eec49901107aa8,PodSandboxId:3ec9e248d92f10aa127b20d4a9ea3a6aef804439831be5a7bb2e6672e84b6676,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt
:1734383126113562135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-677x9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37810520-4f02-46c4-8eeb-6dc70c859e3e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d76c01ea6554ad7ca7460a3e0b52e675fccc595170707f82e376ab5b53a254d3,PodSandboxId:e3547ba0c045ed6fec7866b719a022bebc0360eaf1307e283f12d9b32813f4a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383115507571750
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030d95567448159717b54757a6d98e97,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcfec28b8854e887e26ff5e13a923fb0c3fed8905ec63a220a57a76d0df19da2,PodSandboxId:6e976a36e979dfc8f9c5de9c51f86cc89bd0a1f15aeb530b3b7b24e387da4f8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383115561
461843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b88848a2e234a69d007899565b5bbcce,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a5c85fda02edeece59bcc01bb3489ae65a10b552e4e3b193037ba8d7a2cd2e,PodSandboxId:de7b88b663bb53fde5baa4a22e5958658645e9fc39a50c8952d0c2dfc640612e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734383115511342492,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528ee518c9057e66ed32f2256a823012,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bce3fd7741fe3a1170af645de4eb2329619554377cffee90b06f6dbf85a52f,PodSandboxId:bf8a6fd807d690167ff5bf9d84883c3af170e1ea43fa641f6e17de763289daa2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383115405402764,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a485fe61bbc43636caa6b063150a4f07,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea9639cafb04ef075e0b7522ec597b07b4878836f5fdb90e98b048758325993,PodSandboxId:24f5a79e440b43a9f6694acc89976a3171a8068fb9dcd9ea12c799754ee504b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382826973437312,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b88848a2e234a69d007899565b5bbcce,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d272c423-fd52-4ca6-b9eb-734872ebf40f name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.872600614Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ad5ef05-7520-419c-84d6-045787e5fcb4 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.872893867Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ad5ef05-7520-419c-84d6-045787e5fcb4 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.874255138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd20227b-b9c9-496a-bfda-69f2dc2627c9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.874871300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384030874837601,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd20227b-b9c9-496a-bfda-69f2dc2627c9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.875685030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c945898-e514-42f5-a40f-d0b71cfad300 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.875784724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c945898-e514-42f5-a40f-d0b71cfad300 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.876186179Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f207b770a60c78f07c7d2caae42124dc7cb80a0f2c2c4d421803607465ed058c,PodSandboxId:f1532bf4e0fd1a6d5a9b45282f434801954d844e6d03b0c7eb98493e6c3ab1c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383126995102678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6437bd61-690b-498d-b35c-e2ef4eb5be97,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e011e81807ce18c97ed73f8ae1e9c158bbad51f79df6c1bb7808de64827f86c,PodSandboxId:2b784c516581038f6ad9f2a5df073e00d80f8e09f756249656389e0f641db76f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383126870609242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xhdlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1b5b585-f005-4885-9809-60f60e03bf04,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da4ed6ea7998ea937d7237a73d1d06d850a02996479db7043cc3186011d15164,PodSandboxId:6b2f68a619579c14faf033125a44d8023e136897c6a996a00b0eaf8ca14dc783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383126652789811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5c74p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
f8e73b6-150f-47cc-9df9-dcf983e5bd6e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af285d29097840b5484fc635cea5ab9e9ffa5c72a4d6ad4cc8eec49901107aa8,PodSandboxId:3ec9e248d92f10aa127b20d4a9ea3a6aef804439831be5a7bb2e6672e84b6676,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt
:1734383126113562135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-677x9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37810520-4f02-46c4-8eeb-6dc70c859e3e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d76c01ea6554ad7ca7460a3e0b52e675fccc595170707f82e376ab5b53a254d3,PodSandboxId:e3547ba0c045ed6fec7866b719a022bebc0360eaf1307e283f12d9b32813f4a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383115507571750
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030d95567448159717b54757a6d98e97,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcfec28b8854e887e26ff5e13a923fb0c3fed8905ec63a220a57a76d0df19da2,PodSandboxId:6e976a36e979dfc8f9c5de9c51f86cc89bd0a1f15aeb530b3b7b24e387da4f8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383115561
461843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b88848a2e234a69d007899565b5bbcce,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a5c85fda02edeece59bcc01bb3489ae65a10b552e4e3b193037ba8d7a2cd2e,PodSandboxId:de7b88b663bb53fde5baa4a22e5958658645e9fc39a50c8952d0c2dfc640612e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734383115511342492,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528ee518c9057e66ed32f2256a823012,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bce3fd7741fe3a1170af645de4eb2329619554377cffee90b06f6dbf85a52f,PodSandboxId:bf8a6fd807d690167ff5bf9d84883c3af170e1ea43fa641f6e17de763289daa2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383115405402764,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a485fe61bbc43636caa6b063150a4f07,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea9639cafb04ef075e0b7522ec597b07b4878836f5fdb90e98b048758325993,PodSandboxId:24f5a79e440b43a9f6694acc89976a3171a8068fb9dcd9ea12c799754ee504b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382826973437312,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b88848a2e234a69d007899565b5bbcce,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c945898-e514-42f5-a40f-d0b71cfad300 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.917962184Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c3df5c23-9ae5-4fdb-8296-f5bcd6521a78 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.918059385Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3df5c23-9ae5-4fdb-8296-f5bcd6521a78 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.919056919Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3935b676-d539-4632-9648-09ccdeadd12f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.919511865Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384030919475331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3935b676-d539-4632-9648-09ccdeadd12f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.920257680Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3f1dfbf-a7e7-4fad-be78-20628be52eee name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.920324438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a3f1dfbf-a7e7-4fad-be78-20628be52eee name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.920522544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f207b770a60c78f07c7d2caae42124dc7cb80a0f2c2c4d421803607465ed058c,PodSandboxId:f1532bf4e0fd1a6d5a9b45282f434801954d844e6d03b0c7eb98493e6c3ab1c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383126995102678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6437bd61-690b-498d-b35c-e2ef4eb5be97,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e011e81807ce18c97ed73f8ae1e9c158bbad51f79df6c1bb7808de64827f86c,PodSandboxId:2b784c516581038f6ad9f2a5df073e00d80f8e09f756249656389e0f641db76f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383126870609242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xhdlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1b5b585-f005-4885-9809-60f60e03bf04,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da4ed6ea7998ea937d7237a73d1d06d850a02996479db7043cc3186011d15164,PodSandboxId:6b2f68a619579c14faf033125a44d8023e136897c6a996a00b0eaf8ca14dc783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383126652789811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5c74p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
f8e73b6-150f-47cc-9df9-dcf983e5bd6e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af285d29097840b5484fc635cea5ab9e9ffa5c72a4d6ad4cc8eec49901107aa8,PodSandboxId:3ec9e248d92f10aa127b20d4a9ea3a6aef804439831be5a7bb2e6672e84b6676,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt
:1734383126113562135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-677x9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37810520-4f02-46c4-8eeb-6dc70c859e3e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d76c01ea6554ad7ca7460a3e0b52e675fccc595170707f82e376ab5b53a254d3,PodSandboxId:e3547ba0c045ed6fec7866b719a022bebc0360eaf1307e283f12d9b32813f4a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383115507571750
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030d95567448159717b54757a6d98e97,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcfec28b8854e887e26ff5e13a923fb0c3fed8905ec63a220a57a76d0df19da2,PodSandboxId:6e976a36e979dfc8f9c5de9c51f86cc89bd0a1f15aeb530b3b7b24e387da4f8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383115561
461843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b88848a2e234a69d007899565b5bbcce,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a5c85fda02edeece59bcc01bb3489ae65a10b552e4e3b193037ba8d7a2cd2e,PodSandboxId:de7b88b663bb53fde5baa4a22e5958658645e9fc39a50c8952d0c2dfc640612e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734383115511342492,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528ee518c9057e66ed32f2256a823012,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bce3fd7741fe3a1170af645de4eb2329619554377cffee90b06f6dbf85a52f,PodSandboxId:bf8a6fd807d690167ff5bf9d84883c3af170e1ea43fa641f6e17de763289daa2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383115405402764,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a485fe61bbc43636caa6b063150a4f07,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea9639cafb04ef075e0b7522ec597b07b4878836f5fdb90e98b048758325993,PodSandboxId:24f5a79e440b43a9f6694acc89976a3171a8068fb9dcd9ea12c799754ee504b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382826973437312,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b88848a2e234a69d007899565b5bbcce,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a3f1dfbf-a7e7-4fad-be78-20628be52eee name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.967053854Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c36c1bcd-7a24-4340-836a-5c4f45724516 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.967244662Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c36c1bcd-7a24-4340-836a-5c4f45724516 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.968593600Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=832fc351-7af4-4783-98a5-113fc4ec8736 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.969260431Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384030969223814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=832fc351-7af4-4783-98a5-113fc4ec8736 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.969978200Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f58b088-d1bf-4303-bd93-e5c4537eebf9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.970052649Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f58b088-d1bf-4303-bd93-e5c4537eebf9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:20:30 embed-certs-606219 crio[729]: time="2024-12-16 21:20:30.970455088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f207b770a60c78f07c7d2caae42124dc7cb80a0f2c2c4d421803607465ed058c,PodSandboxId:f1532bf4e0fd1a6d5a9b45282f434801954d844e6d03b0c7eb98493e6c3ab1c8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734383126995102678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6437bd61-690b-498d-b35c-e2ef4eb5be97,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e011e81807ce18c97ed73f8ae1e9c158bbad51f79df6c1bb7808de64827f86c,PodSandboxId:2b784c516581038f6ad9f2a5df073e00d80f8e09f756249656389e0f641db76f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383126870609242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xhdlz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1b5b585-f005-4885-9809-60f60e03bf04,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da4ed6ea7998ea937d7237a73d1d06d850a02996479db7043cc3186011d15164,PodSandboxId:6b2f68a619579c14faf033125a44d8023e136897c6a996a00b0eaf8ca14dc783,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734383126652789811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5c74p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
f8e73b6-150f-47cc-9df9-dcf983e5bd6e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af285d29097840b5484fc635cea5ab9e9ffa5c72a4d6ad4cc8eec49901107aa8,PodSandboxId:3ec9e248d92f10aa127b20d4a9ea3a6aef804439831be5a7bb2e6672e84b6676,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt
:1734383126113562135,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-677x9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37810520-4f02-46c4-8eeb-6dc70c859e3e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d76c01ea6554ad7ca7460a3e0b52e675fccc595170707f82e376ab5b53a254d3,PodSandboxId:e3547ba0c045ed6fec7866b719a022bebc0360eaf1307e283f12d9b32813f4a5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734383115507571750
,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 030d95567448159717b54757a6d98e97,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcfec28b8854e887e26ff5e13a923fb0c3fed8905ec63a220a57a76d0df19da2,PodSandboxId:6e976a36e979dfc8f9c5de9c51f86cc89bd0a1f15aeb530b3b7b24e387da4f8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734383115561
461843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b88848a2e234a69d007899565b5bbcce,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a5c85fda02edeece59bcc01bb3489ae65a10b552e4e3b193037ba8d7a2cd2e,PodSandboxId:de7b88b663bb53fde5baa4a22e5958658645e9fc39a50c8952d0c2dfc640612e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734383115511342492,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 528ee518c9057e66ed32f2256a823012,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bce3fd7741fe3a1170af645de4eb2329619554377cffee90b06f6dbf85a52f,PodSandboxId:bf8a6fd807d690167ff5bf9d84883c3af170e1ea43fa641f6e17de763289daa2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734383115405402764,Labels:map[string]string{io
.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a485fe61bbc43636caa6b063150a4f07,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ea9639cafb04ef075e0b7522ec597b07b4878836f5fdb90e98b048758325993,PodSandboxId:24f5a79e440b43a9f6694acc89976a3171a8068fb9dcd9ea12c799754ee504b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_EXITED,CreatedAt:1734382826973437312,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-606219,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b88848a2e234a69d007899565b5bbcce,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f58b088-d1bf-4303-bd93-e5c4537eebf9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f207b770a60c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   f1532bf4e0fd1       storage-provisioner
	1e011e81807ce       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   2b784c5165810       coredns-668d6bf9bc-xhdlz
	da4ed6ea7998e       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   6b2f68a619579       coredns-668d6bf9bc-5c74p
	af285d2909784       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   15 minutes ago      Running             kube-proxy                0                   3ec9e248d92f1       kube-proxy-677x9
	bcfec28b8854e       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   15 minutes ago      Running             kube-apiserver            2                   6e976a36e979d       kube-apiserver-embed-certs-606219
	b3a5c85fda02e       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   15 minutes ago      Running             kube-scheduler            2                   de7b88b663bb5       kube-scheduler-embed-certs-606219
	d76c01ea6554a       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   15 minutes ago      Running             kube-controller-manager   2                   e3547ba0c045e       kube-controller-manager-embed-certs-606219
	e7bce3fd7741f       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   15 minutes ago      Running             etcd                      2                   bf8a6fd807d69       etcd-embed-certs-606219
	4ea9639cafb04       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   20 minutes ago      Exited              kube-apiserver            1                   24f5a79e440b4       kube-apiserver-embed-certs-606219
	
	
	==> coredns [1e011e81807ce18c97ed73f8ae1e9c158bbad51f79df6c1bb7808de64827f86c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [da4ed6ea7998ea937d7237a73d1d06d850a02996479db7043cc3186011d15164] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-606219
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-606219
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477
	                    minikube.k8s.io/name=embed-certs-606219
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T21_05_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 21:05:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-606219
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 21:20:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 21:17:05 +0000   Mon, 16 Dec 2024 21:05:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 21:17:05 +0000   Mon, 16 Dec 2024 21:05:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 21:17:05 +0000   Mon, 16 Dec 2024 21:05:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 21:17:05 +0000   Mon, 16 Dec 2024 21:05:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.151
	  Hostname:    embed-certs-606219
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 03dc9006e7ea4386a7cd370dbe27528e
	  System UUID:                03dc9006-e7ea-4386-a7cd-370dbe27528e
	  Boot ID:                    eab235e9-606a-4e10-b523-f7e56ad03e67
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-5c74p                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-668d6bf9bc-xhdlz                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-606219                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-606219             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-606219    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-677x9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-606219             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-f79f97bbb-6fxnl                100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-606219 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-606219 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-606219 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-606219 event: Registered Node embed-certs-606219 in Controller
	
	
	==> dmesg <==
	[  +0.055356] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050254] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.707115] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.117016] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.561897] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.244326] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.063876] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065176] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.179422] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.169466] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.314139] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[  +4.632743] systemd-fstab-generator[813]: Ignoring "noauto" option for root device
	[  +0.066196] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.139055] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +5.617115] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.047675] kauditd_printk_skb: 85 callbacks suppressed
	[Dec16 21:05] systemd-fstab-generator[2688]: Ignoring "noauto" option for root device
	[  +0.074849] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.006021] systemd-fstab-generator[3028]: Ignoring "noauto" option for root device
	[  +0.073632] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.377690] systemd-fstab-generator[3149]: Ignoring "noauto" option for root device
	[  +1.061221] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.581801] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [e7bce3fd7741fe3a1170af645de4eb2329619554377cffee90b06f6dbf85a52f] <==
	{"level":"info","ts":"2024-12-16T21:05:26.524190Z","caller":"traceutil/trace.go:171","msg":"trace[424445992] linearizableReadLoop","detail":"{readStateIndex:375; appliedIndex:373; }","duration":"108.374959ms","start":"2024-12-16T21:05:26.415367Z","end":"2024-12-16T21:05:26.523742Z","steps":["trace[424445992] 'read index received'  (duration: 10.832877ms)","trace[424445992] 'applied index is now lower than readState.Index'  (duration: 97.54135ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-16T21:05:26.525378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.945969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/view\" limit:1 ","response":"range_response_count:1 size:2157"}
	{"level":"info","ts":"2024-12-16T21:05:26.525412Z","caller":"traceutil/trace.go:171","msg":"trace[856640821] range","detail":"{range_begin:/registry/clusterroles/view; range_end:; response_count:1; response_revision:366; }","duration":"110.055635ms","start":"2024-12-16T21:05:26.415347Z","end":"2024-12-16T21:05:26.525402Z","steps":["trace[856640821] 'agreement among raft nodes before linearized reading'  (duration: 109.930143ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T21:05:26.531944Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.382844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/metrics-server:system:auth-delegator\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T21:05:26.532003Z","caller":"traceutil/trace.go:171","msg":"trace[1123035585] range","detail":"{range_begin:/registry/clusterrolebindings/metrics-server:system:auth-delegator; range_end:; response_count:0; response_revision:367; }","duration":"112.47339ms","start":"2024-12-16T21:05:26.419519Z","end":"2024-12-16T21:05:26.531993Z","steps":["trace[1123035585] 'agreement among raft nodes before linearized reading'  (duration: 112.338371ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T21:15:16.520409Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":708}
	{"level":"info","ts":"2024-12-16T21:15:16.530431Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":708,"took":"9.585666ms","hash":547536896,"current-db-size-bytes":2367488,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2367488,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-12-16T21:15:16.530476Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":547536896,"revision":708,"compact-revision":-1}
	{"level":"info","ts":"2024-12-16T21:19:18.122327Z","caller":"traceutil/trace.go:171","msg":"trace[1999043595] transaction","detail":"{read_only:false; response_revision:1152; number_of_response:1; }","duration":"124.914467ms","start":"2024-12-16T21:19:17.997371Z","end":"2024-12-16T21:19:18.122285Z","steps":["trace[1999043595] 'process raft request'  (duration: 124.801496ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T21:20:10.155824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"251.005381ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14244203723543656828 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.151\" mod_revision:1185 > success:<request_put:<key:\"/registry/masterleases/192.168.61.151\" value_size:67 lease:5020831686688881018 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.151\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-16T21:20:10.155963Z","caller":"traceutil/trace.go:171","msg":"trace[834116277] linearizableReadLoop","detail":"{readStateIndex:1386; appliedIndex:1385; }","duration":"366.89973ms","start":"2024-12-16T21:20:09.789042Z","end":"2024-12-16T21:20:10.155942Z","steps":["trace[834116277] 'read index received'  (duration: 114.612653ms)","trace[834116277] 'applied index is now lower than readState.Index'  (duration: 252.285709ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T21:20:10.156053Z","caller":"traceutil/trace.go:171","msg":"trace[1523448610] transaction","detail":"{read_only:false; response_revision:1193; number_of_response:1; }","duration":"378.80728ms","start":"2024-12-16T21:20:09.777236Z","end":"2024-12-16T21:20:10.156043Z","steps":["trace[1523448610] 'process raft request'  (duration: 126.462373ms)","trace[1523448610] 'compare'  (duration: 250.857072ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-16T21:20:10.156232Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T21:20:09.777209Z","time spent":"378.87415ms","remote":"127.0.0.1:39158","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.61.151\" mod_revision:1185 > success:<request_put:<key:\"/registry/masterleases/192.168.61.151\" value_size:67 lease:5020831686688881018 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.151\" > >"}
	{"level":"warn","ts":"2024-12-16T21:20:10.156417Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.164532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T21:20:10.157352Z","caller":"traceutil/trace.go:171","msg":"trace[1810184238] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; response_count:0; response_revision:1193; }","duration":"265.106324ms","start":"2024-12-16T21:20:09.892178Z","end":"2024-12-16T21:20:10.157284Z","steps":["trace[1810184238] 'agreement among raft nodes before linearized reading'  (duration: 264.154625ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T21:20:10.156597Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"367.567405ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T21:20:10.157581Z","caller":"traceutil/trace.go:171","msg":"trace[41074781] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1193; }","duration":"368.566105ms","start":"2024-12-16T21:20:09.789001Z","end":"2024-12-16T21:20:10.157567Z","steps":["trace[41074781] 'agreement among raft nodes before linearized reading'  (duration: 367.561591ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T21:20:10.157635Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T21:20:09.788985Z","time spent":"368.630654ms","remote":"127.0.0.1:39282","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-12-16T21:20:10.834441Z","caller":"traceutil/trace.go:171","msg":"trace[1751218117] transaction","detail":"{read_only:false; response_revision:1194; number_of_response:1; }","duration":"396.502659ms","start":"2024-12-16T21:20:10.437921Z","end":"2024-12-16T21:20:10.834424Z","steps":["trace[1751218117] 'process raft request'  (duration: 396.080239ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T21:20:10.834648Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T21:20:10.437897Z","time spent":"396.654276ms","remote":"127.0.0.1:39268","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1192 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-12-16T21:20:11.079738Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"286.666587ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-12-16T21:20:11.079801Z","caller":"traceutil/trace.go:171","msg":"trace[1856172327] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:1194; }","duration":"286.776066ms","start":"2024-12-16T21:20:10.793008Z","end":"2024-12-16T21:20:11.079784Z","steps":["trace[1856172327] 'agreement among raft nodes before linearized reading'  (duration: 41.391565ms)","trace[1856172327] 'count revisions from in-memory index tree'  (duration: 245.277179ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T21:20:16.531654Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":953}
	{"level":"info","ts":"2024-12-16T21:20:16.536214Z","caller":"mvcc/kvstore_compaction.go:72","msg":"finished scheduled compaction","compact-revision":953,"took":"4.214684ms","hash":2728258476,"current-db-size-bytes":2367488,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1634304,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-16T21:20:16.536275Z","caller":"mvcc/hash.go:151","msg":"storing new hash","hash":2728258476,"revision":953,"compact-revision":708}
	
	
	==> kernel <==
	 21:20:31 up 20 min,  0 users,  load average: 0.45, 0.25, 0.20
	Linux embed-certs-606219 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4ea9639cafb04ef075e0b7522ec597b07b4878836f5fdb90e98b048758325993] <==
	W1216 21:05:07.685577       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:07.876405       1 logging.go:55] [core] [Channel #18 SubChannel #19]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:07.893600       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:08.174204       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:11.227413       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:11.662920       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:11.947981       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:11.958953       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.227327       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.393366       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.408249       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.426768       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.496363       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.516644       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.532288       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.545081       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.679580       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.686241       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.719670       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.721098       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.759430       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.796210       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.808210       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.975388       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1216 21:05:12.975412       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [bcfec28b8854e887e26ff5e13a923fb0c3fed8905ec63a220a57a76d0df19da2] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1216 21:18:18.921928       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:18:18.922441       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1216 21:18:18.923610       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 21:18:18.923663       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1216 21:20:17.923310       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:20:17.924055       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1216 21:20:18.926041       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:20:18.926222       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1216 21:20:18.926271       1 handler_proxy.go:99] no RequestInfo found in the context
	E1216 21:20:18.926318       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 21:20:18.927494       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1216 21:20:18.927559       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1216 21:20:29.788432       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1216 21:20:29.790241       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1216 21:20:29.791593       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1216 21:20:29.792990       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.216754ms" method="GET" path="/api/v1/namespaces/kubernetes-dashboard/pods" result=null
	
	
	==> kube-controller-manager [d76c01ea6554ad7ca7460a3e0b52e675fccc595170707f82e376ab5b53a254d3] <==
	E1216 21:15:24.579021       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:15:24.628222       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:15:54.585785       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:15:54.635627       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:16:24.593630       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:16:24.644052       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 21:16:30.782859       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="317.689µs"
	I1216 21:16:42.779890       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-f79f97bbb" duration="53.28µs"
	E1216 21:16:54.601728       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:16:54.654775       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1216 21:17:05.343984       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="embed-certs-606219"
	E1216 21:17:24.608242       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:17:24.663052       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:17:54.615846       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:17:54.672767       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:18:24.623090       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:18:24.683027       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:18:54.629369       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:18:54.693024       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:19:24.637596       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:19:24.701976       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:19:54.645191       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:19:54.711266       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1216 21:20:24.652865       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1216 21:20:24.720878       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
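The resource-quota and garbage-collector errors above all point at the v1beta1.metrics.k8s.io APIService being unavailable (the backing metrics-server pod never starts; see the kubelet log below). A minimal way to confirm that from the same profile, as a hedged sketch that assumes the embed-certs-606219 context from this run is still reachable:

    # Not part of the harness output: inspect the aggregated API the controller-manager
    # keeps failing to discover. AVAILABLE=False here matches the repeated
    # "stale GroupVersion discovery: metrics.k8s.io/v1beta1" errors above.
    kubectl --context embed-certs-606219 get apiservice v1beta1.metrics.k8s.io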
	
	
	==> kube-proxy [af285d29097840b5484fc635cea5ab9e9ffa5c72a4d6ad4cc8eec49901107aa8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1216 21:05:27.385466       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1216 21:05:27.398312       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.61.151"]
	E1216 21:05:27.398631       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 21:05:27.448405       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1216 21:05:27.448472       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 21:05:27.448505       1 server_linux.go:170] "Using iptables Proxier"
	I1216 21:05:27.451919       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 21:05:27.452359       1 server.go:497] "Version info" version="v1.32.0"
	I1216 21:05:27.452389       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 21:05:27.454751       1 config.go:199] "Starting service config controller"
	I1216 21:05:27.454797       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 21:05:27.454820       1 config.go:105] "Starting endpoint slice config controller"
	I1216 21:05:27.454824       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 21:05:27.455585       1 config.go:329] "Starting node config controller"
	I1216 21:05:27.455615       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 21:05:27.554963       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 21:05:27.555071       1 shared_informer.go:320] Caches are synced for service config
	I1216 21:05:27.556020       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b3a5c85fda02edeece59bcc01bb3489ae65a10b552e4e3b193037ba8d7a2cd2e] <==
	W1216 21:05:18.841033       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 21:05:18.841579       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:18.846825       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1216 21:05:18.846898       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:18.866015       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1216 21:05:18.866071       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:18.868512       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1216 21:05:18.869510       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:18.891764       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 21:05:18.892244       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:18.936700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 21:05:18.936802       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:19.016309       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1216 21:05:19.016412       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:19.019988       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1216 21:05:19.020053       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:19.082030       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1216 21:05:19.082084       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:19.136485       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1216 21:05:19.136583       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:19.188694       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 21:05:19.188895       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 21:05:19.219966       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1216 21:05:19.220097       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1216 21:05:22.434428       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 21:19:21 embed-certs-606219 kubelet[3035]: E1216 21:19:21.070621    3035 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383961069657808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:19:31 embed-certs-606219 kubelet[3035]: E1216 21:19:31.075522    3035 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383971073838929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:19:31 embed-certs-606219 kubelet[3035]: E1216 21:19:31.075864    3035 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383971073838929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:19:33 embed-certs-606219 kubelet[3035]: E1216 21:19:33.762396    3035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-6fxnl" podUID="828f2925-402c-4f49-89e1-354e082c0de4"
	Dec 16 21:19:41 embed-certs-606219 kubelet[3035]: E1216 21:19:41.078394    3035 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383981077757046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:19:41 embed-certs-606219 kubelet[3035]: E1216 21:19:41.078896    3035 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383981077757046,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:19:44 embed-certs-606219 kubelet[3035]: E1216 21:19:44.763467    3035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-6fxnl" podUID="828f2925-402c-4f49-89e1-354e082c0de4"
	Dec 16 21:19:51 embed-certs-606219 kubelet[3035]: E1216 21:19:51.081280    3035 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383991080765588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:19:51 embed-certs-606219 kubelet[3035]: E1216 21:19:51.081349    3035 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383991080765588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:19:58 embed-certs-606219 kubelet[3035]: E1216 21:19:58.762502    3035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-6fxnl" podUID="828f2925-402c-4f49-89e1-354e082c0de4"
	Dec 16 21:20:01 embed-certs-606219 kubelet[3035]: E1216 21:20:01.083825    3035 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384001083390793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:20:01 embed-certs-606219 kubelet[3035]: E1216 21:20:01.084244    3035 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384001083390793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:20:09 embed-certs-606219 kubelet[3035]: E1216 21:20:09.762774    3035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-6fxnl" podUID="828f2925-402c-4f49-89e1-354e082c0de4"
	Dec 16 21:20:11 embed-certs-606219 kubelet[3035]: E1216 21:20:11.087187    3035 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384011086289393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:20:11 embed-certs-606219 kubelet[3035]: E1216 21:20:11.087252    3035 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384011086289393,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:20:20 embed-certs-606219 kubelet[3035]: E1216 21:20:20.811319    3035 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 16 21:20:20 embed-certs-606219 kubelet[3035]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 16 21:20:20 embed-certs-606219 kubelet[3035]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 16 21:20:20 embed-certs-606219 kubelet[3035]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 16 21:20:20 embed-certs-606219 kubelet[3035]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 16 21:20:21 embed-certs-606219 kubelet[3035]: E1216 21:20:21.089758    3035 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384021089047537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:20:21 embed-certs-606219 kubelet[3035]: E1216 21:20:21.089921    3035 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384021089047537,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:20:22 embed-certs-606219 kubelet[3035]: E1216 21:20:22.762887    3035 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-f79f97bbb-6fxnl" podUID="828f2925-402c-4f49-89e1-354e082c0de4"
	Dec 16 21:20:31 embed-certs-606219 kubelet[3035]: E1216 21:20:31.092339    3035 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384031091779831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 21:20:31 embed-certs-606219 kubelet[3035]: E1216 21:20:31.092372    3035 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734384031091779831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134613,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
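The kubelet entries above show why metrics-server never becomes ready: its image points at the fake.domain registry, which DNS cannot resolve ("lookup fake.domain: no such host"), so every pull attempt ends in ImagePullBackOff. A hedged way to print the offending image reference, assuming the addon uses the usual metrics-server Deployment in kube-system:

    # Hypothetical check, not run by the harness: show the image the pod keeps trying to pull.
    # Expected output, per the kubelet log above: fake.domain/registry.k8s.io/echoserver:1.4
    kubectl --context embed-certs-606219 -n kube-system get deployment metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'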
	
	
	==> storage-provisioner [f207b770a60c78f07c7d2caae42124dc7cb80a0f2c2c4d421803607465ed058c] <==
	I1216 21:05:27.318271       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 21:05:27.351928       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 21:05:27.352251       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 21:05:27.368455       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 21:05:27.369185       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce61853c-bbb0-4582-9389-51e55aaa1cf4", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-606219_226d71a5-5f7f-477e-8a29-66b3064d5f06 became leader
	I1216 21:05:27.369267       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-606219_226d71a5-5f7f-477e-8a29-66b3064d5f06!
	I1216 21:05:27.469430       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-606219_226d71a5-5f7f-477e-8a29-66b3064d5f06!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-606219 -n embed-certs-606219
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-606219 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-f79f97bbb-6fxnl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-606219 describe pod metrics-server-f79f97bbb-6fxnl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-606219 describe pod metrics-server-f79f97bbb-6fxnl: exit status 1 (63.376608ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-f79f97bbb-6fxnl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-606219 describe pod metrics-server-f79f97bbb-6fxnl: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (356.41s)
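For reference, the post-mortem can be reproduced by hand with the same commands the harness ran (a hedged sketch; it assumes the embed-certs-606219 profile from this run still exists). The "not found" in the stderr block above is most likely because describe was run without a namespace while the pod lives in kube-system:

    # List every pod that is not in phase Running, as helpers_test.go:261 does.
    kubectl --context embed-certs-606219 get po -A \
      --field-selector=status.phase!=Running \
      -o=jsonpath='{.items[*].metadata.name}'

    # Describe the stuck pod in its actual namespace (adding -n kube-system, which the
    # harness invocation omitted).
    kubectl --context embed-certs-606219 -n kube-system describe pod metrics-server-f79f97bbb-6fxnl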

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (93.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
E1216 21:17:13.884695   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
[the helpers_test.go:329 warning above repeated 70 more times; every poll of the kubernetes-dashboard pod list failed with "dial tcp 192.168.72.240:8443: connect: connection refused"]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.240:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.240:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-847766 -n old-k8s-version-847766
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-847766 -n old-k8s-version-847766: exit status 2 (237.372767ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-847766" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-847766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-847766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.747µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-847766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-847766 -n old-k8s-version-847766
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-847766 -n old-k8s-version-847766: exit status 2 (239.625524ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-847766 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-847766 logs -n 25: (1.601014106s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-976873                              | stopped-upgrade-976873       | jenkins | v1.34.0 | 16 Dec 24 20:49 UTC | 16 Dec 24 20:50 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-560677                           | kubernetes-upgrade-560677    | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:50 UTC |
	| start   | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:52 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-976873                              | stopped-upgrade-976873       | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:50 UTC |
	| start   | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:50 UTC | 16 Dec 24 20:51 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-270954                              | cert-expiration-270954       | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-606219            | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-270954                              | cert-expiration-270954       | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	| delete  | -p                                                     | disable-driver-mounts-384008 | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:51 UTC |
	|         | disable-driver-mounts-384008                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:51 UTC | 16 Dec 24 20:52 UTC |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-232338             | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC | 16 Dec 24 20:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-327790  | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC | 16 Dec 24 20:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:52 UTC |                     |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-847766        | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:53 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-606219                 | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-606219                                  | embed-certs-606219           | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC | 16 Dec 24 21:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-232338                  | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-232338                                   | no-preload-232338            | jenkins | v1.34.0 | 16 Dec 24 20:54 UTC | 16 Dec 24 21:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-327790       | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 20:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-327790 | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 21:04 UTC |
	|         | default-k8s-diff-port-327790                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-847766             | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC | 16 Dec 24 20:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-847766                              | old-k8s-version-847766       | jenkins | v1.34.0 | 16 Dec 24 20:55 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 20:55:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 20:55:34.390724   60933 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:55:34.390973   60933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:55:34.390982   60933 out.go:358] Setting ErrFile to fd 2...
	I1216 20:55:34.390986   60933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:55:34.391166   60933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:55:34.391763   60933 out.go:352] Setting JSON to false
	I1216 20:55:34.392611   60933 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5879,"bootTime":1734376655,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 20:55:34.392675   60933 start.go:139] virtualization: kvm guest
	I1216 20:55:34.394822   60933 out.go:177] * [old-k8s-version-847766] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 20:55:34.396184   60933 notify.go:220] Checking for updates...
	I1216 20:55:34.396201   60933 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 20:55:34.397724   60933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 20:55:34.399130   60933 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:55:34.400470   60933 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:55:34.401934   60933 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 20:55:34.403341   60933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 20:55:34.405179   60933 config.go:182] Loaded profile config "old-k8s-version-847766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:55:34.405571   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:55:34.405650   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:55:34.421052   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41215
	I1216 20:55:34.421523   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:55:34.422018   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:55:34.422035   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:55:34.422373   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:55:34.422646   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:55:34.424565   60933 out.go:177] * Kubernetes 1.32.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.0
	I1216 20:55:34.426088   60933 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 20:55:34.426419   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:55:34.426474   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:55:34.441375   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36915
	I1216 20:55:34.441833   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:55:34.442327   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:55:34.442349   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:55:34.442658   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:55:34.442852   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:55:34.480512   60933 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 20:55:34.481972   60933 start.go:297] selected driver: kvm2
	I1216 20:55:34.481988   60933 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:55:34.482125   60933 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 20:55:34.482826   60933 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:55:34.482907   60933 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 20:55:34.498561   60933 install.go:137] /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2 version is 1.34.0
	I1216 20:55:34.498953   60933 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 20:55:34.498981   60933 cni.go:84] Creating CNI manager for ""
	I1216 20:55:34.499022   60933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:55:34.499060   60933 start.go:340] cluster config:
	{Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:55:34.499164   60933 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 20:55:34.501128   60933 out.go:177] * Starting "old-k8s-version-847766" primary control-plane node in "old-k8s-version-847766" cluster
	I1216 20:55:29.827520   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:32.899553   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:30.468027   60829 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:55:30.468071   60829 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:55:30.468079   60829 cache.go:56] Caching tarball of preloaded images
	I1216 20:55:30.468192   60829 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:55:30.468206   60829 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1216 20:55:30.468310   60829 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/config.json ...
	I1216 20:55:30.468540   60829 start.go:360] acquireMachinesLock for default-k8s-diff-port-327790: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:55:34.502579   60933 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:55:34.502609   60933 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1216 20:55:34.502615   60933 cache.go:56] Caching tarball of preloaded images
	I1216 20:55:34.502716   60933 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 20:55:34.502731   60933 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1216 20:55:34.502823   60933 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/config.json ...
	I1216 20:55:34.503011   60933 start.go:360] acquireMachinesLock for old-k8s-version-847766: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:55:38.979556   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:42.051532   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:48.131588   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:51.203568   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:55:57.283622   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:00.355490   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:06.435543   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:09.507559   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:15.587526   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:18.659657   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:24.739528   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:27.811498   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:33.891518   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:36.963554   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:43.043553   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:46.115578   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:52.195583   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:56:55.267507   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:01.347591   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:04.419562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:10.499479   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:13.571540   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:19.651541   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:22.723545   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:28.803551   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:31.875527   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:37.955563   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:41.027520   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:47.107494   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:50.179566   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:56.259550   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:57:59.331540   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:05.411562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:08.483592   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:14.563574   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:17.635522   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:23.715548   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:26.787559   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:32.867539   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:35.939502   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:42.019562   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:45.091545   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:51.171521   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:54.243542   60215 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.151:22: connect: no route to host
	I1216 20:58:57.248710   60421 start.go:364] duration metric: took 4m14.403979547s to acquireMachinesLock for "no-preload-232338"
	I1216 20:58:57.248796   60421 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:58:57.248804   60421 fix.go:54] fixHost starting: 
	I1216 20:58:57.249232   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:58:57.249288   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:58:57.264905   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39773
	I1216 20:58:57.265423   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:58:57.265982   60421 main.go:141] libmachine: Using API Version  1
	I1216 20:58:57.266005   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:58:57.266396   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:58:57.266636   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:58:57.266807   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 20:58:57.268705   60421 fix.go:112] recreateIfNeeded on no-preload-232338: state=Stopped err=<nil>
	I1216 20:58:57.268730   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	W1216 20:58:57.268918   60421 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:58:57.270855   60421 out.go:177] * Restarting existing kvm2 VM for "no-preload-232338" ...
	I1216 20:58:57.272142   60421 main.go:141] libmachine: (no-preload-232338) Calling .Start
	I1216 20:58:57.272374   60421 main.go:141] libmachine: (no-preload-232338) Ensuring networks are active...
	I1216 20:58:57.273245   60421 main.go:141] libmachine: (no-preload-232338) Ensuring network default is active
	I1216 20:58:57.273660   60421 main.go:141] libmachine: (no-preload-232338) Ensuring network mk-no-preload-232338 is active
	I1216 20:58:57.274025   60421 main.go:141] libmachine: (no-preload-232338) Getting domain xml...
	I1216 20:58:57.274673   60421 main.go:141] libmachine: (no-preload-232338) Creating domain...
	I1216 20:58:57.245632   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:58:57.245753   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 20:58:57.246111   60215 buildroot.go:166] provisioning hostname "embed-certs-606219"
	I1216 20:58:57.246149   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 20:58:57.246399   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 20:58:57.248517   60215 machine.go:96] duration metric: took 4m37.414570479s to provisionDockerMachine
	I1216 20:58:57.248579   60215 fix.go:56] duration metric: took 4m37.437232743s for fixHost
	I1216 20:58:57.248587   60215 start.go:83] releasing machines lock for "embed-certs-606219", held for 4m37.437262865s
	W1216 20:58:57.248614   60215 start.go:714] error starting host: provision: host is not running
	W1216 20:58:57.248791   60215 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1216 20:58:57.248801   60215 start.go:729] Will try again in 5 seconds ...
	I1216 20:58:58.506521   60421 main.go:141] libmachine: (no-preload-232338) Waiting to get IP...
	I1216 20:58:58.507302   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:58.507627   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:58.507699   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:58.507613   61660 retry.go:31] will retry after 230.281045ms: waiting for machine to come up
	I1216 20:58:58.739343   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:58.739781   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:58.739804   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:58.739741   61660 retry.go:31] will retry after 323.962271ms: waiting for machine to come up
	I1216 20:58:59.065388   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:59.065856   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:59.065884   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:59.065816   61660 retry.go:31] will retry after 364.058481ms: waiting for machine to come up
	I1216 20:58:59.431290   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:58:59.431680   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:58:59.431707   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:58:59.431631   61660 retry.go:31] will retry after 569.845721ms: waiting for machine to come up
	I1216 20:59:00.003562   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:00.004030   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:00.004093   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:00.003988   61660 retry.go:31] will retry after 728.729909ms: waiting for machine to come up
	I1216 20:59:00.733954   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:00.734450   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:00.734482   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:00.734388   61660 retry.go:31] will retry after 679.479889ms: waiting for machine to come up
	I1216 20:59:01.415289   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:01.415739   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:01.415763   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:01.415690   61660 retry.go:31] will retry after 1.136560245s: waiting for machine to come up
	I1216 20:59:02.554094   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:02.554523   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:02.554548   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:02.554470   61660 retry.go:31] will retry after 1.299578742s: waiting for machine to come up
	I1216 20:59:02.250499   60215 start.go:360] acquireMachinesLock for embed-certs-606219: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 20:59:03.855999   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:03.856366   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:03.856399   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:03.856300   61660 retry.go:31] will retry after 1.761269163s: waiting for machine to come up
	I1216 20:59:05.620383   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:05.620837   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:05.620858   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:05.620818   61660 retry.go:31] will retry after 2.100894301s: waiting for machine to come up
	I1216 20:59:07.723931   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:07.724300   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:07.724322   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:07.724273   61660 retry.go:31] will retry after 2.57501483s: waiting for machine to come up
	I1216 20:59:10.302185   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:10.302766   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:10.302802   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:10.302706   61660 retry.go:31] will retry after 2.68456895s: waiting for machine to come up
	I1216 20:59:17.060397   60829 start.go:364] duration metric: took 3m46.591813882s to acquireMachinesLock for "default-k8s-diff-port-327790"
	I1216 20:59:17.060456   60829 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:17.060462   60829 fix.go:54] fixHost starting: 
	I1216 20:59:17.060878   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:17.060935   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:17.079226   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41365
	I1216 20:59:17.079715   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:17.080173   60829 main.go:141] libmachine: Using API Version  1
	I1216 20:59:17.080202   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:17.080554   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:17.080731   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:17.080873   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 20:59:17.082368   60829 fix.go:112] recreateIfNeeded on default-k8s-diff-port-327790: state=Stopped err=<nil>
	I1216 20:59:17.082399   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	W1216 20:59:17.082570   60829 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:17.085104   60829 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-327790" ...
	I1216 20:59:12.988787   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:12.989140   60421 main.go:141] libmachine: (no-preload-232338) DBG | unable to find current IP address of domain no-preload-232338 in network mk-no-preload-232338
	I1216 20:59:12.989172   60421 main.go:141] libmachine: (no-preload-232338) DBG | I1216 20:59:12.989098   61660 retry.go:31] will retry after 2.793178881s: waiting for machine to come up
	I1216 20:59:15.786011   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.786518   60421 main.go:141] libmachine: (no-preload-232338) Found IP for machine: 192.168.50.240
	I1216 20:59:15.786540   60421 main.go:141] libmachine: (no-preload-232338) Reserving static IP address...
	I1216 20:59:15.786564   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has current primary IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.786948   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "no-preload-232338", mac: "52:54:00:07:00:29", ip: "192.168.50.240"} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.786983   60421 main.go:141] libmachine: (no-preload-232338) DBG | skip adding static IP to network mk-no-preload-232338 - found existing host DHCP lease matching {name: "no-preload-232338", mac: "52:54:00:07:00:29", ip: "192.168.50.240"}
	I1216 20:59:15.786995   60421 main.go:141] libmachine: (no-preload-232338) Reserved static IP address: 192.168.50.240
	I1216 20:59:15.787009   60421 main.go:141] libmachine: (no-preload-232338) Waiting for SSH to be available...
	I1216 20:59:15.787022   60421 main.go:141] libmachine: (no-preload-232338) DBG | Getting to WaitForSSH function...
	I1216 20:59:15.789175   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.789509   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.789542   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.789633   60421 main.go:141] libmachine: (no-preload-232338) DBG | Using SSH client type: external
	I1216 20:59:15.789674   60421 main.go:141] libmachine: (no-preload-232338) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa (-rw-------)
	I1216 20:59:15.789709   60421 main.go:141] libmachine: (no-preload-232338) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:15.789718   60421 main.go:141] libmachine: (no-preload-232338) DBG | About to run SSH command:
	I1216 20:59:15.789726   60421 main.go:141] libmachine: (no-preload-232338) DBG | exit 0
	I1216 20:59:15.915980   60421 main.go:141] libmachine: (no-preload-232338) DBG | SSH cmd err, output: <nil>: 
	I1216 20:59:15.916473   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetConfigRaw
	I1216 20:59:15.917088   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:15.919782   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.920161   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.920192   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.920408   60421 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/config.json ...
	I1216 20:59:15.920636   60421 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:15.920654   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:15.920875   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:15.923221   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.923623   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:15.923650   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:15.923784   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:15.923971   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:15.924107   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:15.924246   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:15.924395   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:15.924715   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:15.924729   60421 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:16.032079   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:16.032108   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.032397   60421 buildroot.go:166] provisioning hostname "no-preload-232338"
	I1216 20:59:16.032423   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.032649   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.035467   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.035798   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.035826   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.036003   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.036184   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.036335   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.036494   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.036679   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.036847   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.036859   60421 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-232338 && echo "no-preload-232338" | sudo tee /etc/hostname
	I1216 20:59:16.161958   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-232338
	
	I1216 20:59:16.161996   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.164585   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.165086   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.165130   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.165370   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.165578   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.165746   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.165877   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.166015   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.166188   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.166204   60421 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-232338' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-232338/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-232338' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:16.285329   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:16.285374   60421 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:16.285407   60421 buildroot.go:174] setting up certificates
	I1216 20:59:16.285422   60421 provision.go:84] configureAuth start
	I1216 20:59:16.285432   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetMachineName
	I1216 20:59:16.285764   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:16.288773   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.289161   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.289192   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.289405   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.291687   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.292042   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.292072   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.292190   60421 provision.go:143] copyHostCerts
	I1216 20:59:16.292260   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:16.292274   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:16.292343   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:16.292470   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:16.292481   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:16.292508   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:16.292563   60421 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:16.292570   60421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:16.292590   60421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:16.292649   60421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.no-preload-232338 san=[127.0.0.1 192.168.50.240 localhost minikube no-preload-232338]
	I1216 20:59:16.407096   60421 provision.go:177] copyRemoteCerts
	I1216 20:59:16.407187   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:16.407227   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.410400   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.410725   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.410755   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.410977   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.411188   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.411437   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.411618   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:16.498456   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 20:59:16.525297   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:16.551135   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 20:59:16.576040   60421 provision.go:87] duration metric: took 290.601941ms to configureAuth
	I1216 20:59:16.576074   60421 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:16.576288   60421 config.go:182] Loaded profile config "no-preload-232338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:59:16.576396   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.579169   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.579607   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.579641   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.579795   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.580016   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.580165   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.580311   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.580467   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.580629   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.580643   60421 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:16.816973   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:16.816998   60421 machine.go:96] duration metric: took 896.349056ms to provisionDockerMachine
	I1216 20:59:16.817010   60421 start.go:293] postStartSetup for "no-preload-232338" (driver="kvm2")
	I1216 20:59:16.817030   60421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:16.817044   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:16.817427   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:16.817454   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.820182   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.820550   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.820578   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.820713   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.820914   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.821096   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.821274   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:16.906513   60421 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:16.911314   60421 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:16.911346   60421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:16.911482   60421 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:16.911589   60421 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:16.911720   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:16.921890   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:16.947114   60421 start.go:296] duration metric: took 130.089628ms for postStartSetup
	I1216 20:59:16.947192   60421 fix.go:56] duration metric: took 19.698385497s for fixHost
	I1216 20:59:16.947229   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:16.950156   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.950543   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:16.950575   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:16.950780   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:16.950996   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.951199   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:16.951394   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:16.951604   60421 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:16.951829   60421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.240 22 <nil> <nil>}
	I1216 20:59:16.951843   60421 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:17.060233   60421 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382757.032597424
	
	I1216 20:59:17.060258   60421 fix.go:216] guest clock: 1734382757.032597424
	I1216 20:59:17.060265   60421 fix.go:229] Guest: 2024-12-16 20:59:17.032597424 +0000 UTC Remote: 2024-12-16 20:59:16.947203535 +0000 UTC m=+274.247918927 (delta=85.393889ms)
	I1216 20:59:17.060290   60421 fix.go:200] guest clock delta is within tolerance: 85.393889ms
	I1216 20:59:17.060294   60421 start.go:83] releasing machines lock for "no-preload-232338", held for 19.811539815s
	I1216 20:59:17.060318   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.060636   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:17.063346   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.063742   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.063764   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.063900   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064419   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064647   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 20:59:17.064766   60421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:17.064804   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:17.064897   60421 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:17.064923   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 20:59:17.067687   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.067897   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068129   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.068166   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068314   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:17.068318   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:17.068343   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:17.068491   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 20:59:17.068573   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:17.068754   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:17.068778   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 20:59:17.068914   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:17.069085   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 20:59:17.069229   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 20:59:17.149502   60421 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:17.184981   60421 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:17.335267   60421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:17.344316   60421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:17.344381   60421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:17.362422   60421 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:17.362450   60421 start.go:495] detecting cgroup driver to use...
	I1216 20:59:17.362526   60421 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:17.379285   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:17.394451   60421 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:17.394514   60421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:17.411856   60421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:17.428028   60421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:17.557602   60421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:17.699140   60421 docker.go:233] disabling docker service ...
	I1216 20:59:17.699215   60421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:17.715236   60421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:17.729268   60421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:17.875729   60421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:18.007569   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:18.022940   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:18.042227   60421 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 20:59:18.042292   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.053011   60421 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:18.053081   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.063767   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.074262   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.085372   60421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:18.098366   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.113619   60421 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.134081   60421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:18.145276   60421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:18.155733   60421 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:18.155806   60421 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:18.170492   60421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 20:59:18.182276   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:18.291278   60421 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:18.384618   60421 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:18.384700   60421 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:18.390755   60421 start.go:563] Will wait 60s for crictl version
	I1216 20:59:18.390823   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.395435   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:18.439300   60421 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:18.439390   60421 ssh_runner.go:195] Run: crio --version
	I1216 20:59:18.473976   60421 ssh_runner.go:195] Run: crio --version
	I1216 20:59:18.505262   60421 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 20:59:17.086569   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Start
	I1216 20:59:17.086752   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring networks are active...
	I1216 20:59:17.087656   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring network default is active
	I1216 20:59:17.088082   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Ensuring network mk-default-k8s-diff-port-327790 is active
	I1216 20:59:17.088482   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Getting domain xml...
	I1216 20:59:17.089219   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Creating domain...
	I1216 20:59:18.413245   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting to get IP...
	I1216 20:59:18.414327   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.414794   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.414907   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.414784   61807 retry.go:31] will retry after 229.952775ms: waiting for machine to come up
	I1216 20:59:18.646270   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.646677   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.646727   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.646654   61807 retry.go:31] will retry after 341.342128ms: waiting for machine to come up
	I1216 20:59:18.989285   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.989781   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:18.989809   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:18.989740   61807 retry.go:31] will retry after 311.937657ms: waiting for machine to come up
	I1216 20:59:19.303619   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.304189   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.304221   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:19.304131   61807 retry.go:31] will retry after 515.638431ms: waiting for machine to come up
	I1216 20:59:19.821478   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.821955   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:19.821997   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:19.821900   61807 retry.go:31] will retry after 590.835789ms: waiting for machine to come up
	I1216 20:59:18.506840   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetIP
	I1216 20:59:18.510260   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:18.510654   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 20:59:18.510689   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 20:59:18.510875   60421 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:18.515632   60421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:18.529943   60421 kubeadm.go:883] updating cluster {Name:no-preload-232338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:18.530128   60421 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:59:18.530184   60421 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:18.569526   60421 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 20:59:18.569555   60421 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.32.0 registry.k8s.io/kube-controller-manager:v1.32.0 registry.k8s.io/kube-scheduler:v1.32.0 registry.k8s.io/kube-proxy:v1.32.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.16-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 20:59:18.569650   60421 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:18.569669   60421 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.569688   60421 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.569651   60421 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.569774   60421 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.569859   60421 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.569859   60421 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1216 20:59:18.570294   60421 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.571577   60421 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.571602   60421 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.16-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.571582   60421 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:18.571585   60421 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.571583   60421 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.571580   60421 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.571828   60421 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.32.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.571953   60421 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1216 20:59:18.781052   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.783569   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.795901   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.799273   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.801098   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.802163   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1216 20:59:18.828334   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:18.897880   60421 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.32.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.32.0" does not exist at hash "a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5" in container runtime
	I1216 20:59:18.897942   60421 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:18.898003   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.910616   60421 cache_images.go:116] "registry.k8s.io/etcd:3.5.16-0" needs transfer: "registry.k8s.io/etcd:3.5.16-0" does not exist at hash "a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" in container runtime
	I1216 20:59:18.910665   60421 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:18.910713   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.937699   60421 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.32.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.32.0" does not exist at hash "8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3" in container runtime
	I1216 20:59:18.937753   60421 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:18.937804   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.979455   60421 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.32.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.32.0" does not exist at hash "c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4" in container runtime
	I1216 20:59:18.979500   60421 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:18.979540   60421 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1216 20:59:18.979555   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:18.979586   60421 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:18.979636   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.002472   60421 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.076177   60421 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.32.0" needs transfer: "registry.k8s.io/kube-proxy:v1.32.0" does not exist at hash "040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08" in container runtime
	I1216 20:59:19.076217   60421 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.076237   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.076252   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.076292   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.076351   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.076408   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.076487   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.076511   60421 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1216 20:59:19.076536   60421 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.076580   60421 ssh_runner.go:195] Run: which crictl
	I1216 20:59:19.204766   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.204846   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.204904   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.204959   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.205097   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.205212   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.205285   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.365421   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.365466   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.365512   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1216 20:59:19.365620   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.32.0
	I1216 20:59:19.365652   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.32.0
	I1216 20:59:19.365771   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.32.0
	I1216 20:59:19.365861   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.16-0
	I1216 20:59:19.539614   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1216 20:59:19.539729   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:19.539740   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.32.0
	I1216 20:59:19.539740   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0
	I1216 20:59:19.539817   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0
	I1216 20:59:19.539839   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:19.539840   60421 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 20:59:19.539885   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.539949   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0
	I1216 20:59:19.540000   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0
	I1216 20:59:19.540029   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:19.540062   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:19.555043   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.32.0 (exists)
	I1216 20:59:19.555076   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.555135   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0
	I1216 20:59:19.555251   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1216 20:59:19.630857   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.16-0 (exists)
	I1216 20:59:19.630949   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1216 20:59:19.630983   60421 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0
	I1216 20:59:19.631030   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.32.0 (exists)
	I1216 20:59:19.631065   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:19.631104   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.32.0 (exists)
	I1216 20:59:19.631069   60421 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:21.838285   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.32.0: (2.283119694s)
	I1216 20:59:21.838328   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.0 from cache
	I1216 20:59:21.838359   60421 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:21.838394   60421 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.20725659s)
	I1216 20:59:21.838414   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1216 20:59:21.838421   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1216 20:59:21.838361   60421 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.32.0: (2.207274997s)
	I1216 20:59:21.838471   60421 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.32.0 (exists)
	I1216 20:59:20.414932   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:20.415565   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:20.415597   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:20.415502   61807 retry.go:31] will retry after 698.152518ms: waiting for machine to come up
	I1216 20:59:21.115103   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:21.115597   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:21.115627   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:21.115543   61807 retry.go:31] will retry after 891.02308ms: waiting for machine to come up
	I1216 20:59:22.008636   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.009070   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.009098   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:22.009015   61807 retry.go:31] will retry after 923.634312ms: waiting for machine to come up
	I1216 20:59:22.934238   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.934753   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:22.934784   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:22.934697   61807 retry.go:31] will retry after 1.142718367s: waiting for machine to come up
	I1216 20:59:24.078935   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:24.079398   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:24.079429   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:24.079363   61807 retry.go:31] will retry after 1.541033224s: waiting for machine to come up
	I1216 20:59:23.901058   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.062611423s)
	I1216 20:59:23.901091   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1216 20:59:23.901122   60421 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:23.901169   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0
	I1216 20:59:25.621932   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:25.622401   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:25.622433   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:25.622364   61807 retry.go:31] will retry after 2.600280234s: waiting for machine to come up
	I1216 20:59:28.224296   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:28.224874   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:28.224892   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:28.224828   61807 retry.go:31] will retry after 3.308841216s: waiting for machine to come up
	I1216 20:59:27.793238   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.16-0: (3.892042799s)
	I1216 20:59:27.793280   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 from cache
	I1216 20:59:27.793321   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:27.793420   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0
	I1216 20:59:29.552069   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.32.0: (1.758623471s)
	I1216 20:59:29.552102   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.0 from cache
	I1216 20:59:29.552130   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:29.552177   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0
	I1216 20:59:31.708930   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.32.0: (2.156719559s)
	I1216 20:59:31.708971   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.0 from cache
	I1216 20:59:31.709008   60421 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:31.709057   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1216 20:59:32.660657   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1216 20:59:32.660713   60421 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:32.660775   60421 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0
	I1216 20:59:31.537153   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:31.537735   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | unable to find current IP address of domain default-k8s-diff-port-327790 in network mk-default-k8s-diff-port-327790
	I1216 20:59:31.537795   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | I1216 20:59:31.537710   61807 retry.go:31] will retry after 4.259700282s: waiting for machine to come up
	I1216 20:59:37.140408   60933 start.go:364] duration metric: took 4m2.637362394s to acquireMachinesLock for "old-k8s-version-847766"
	I1216 20:59:37.140483   60933 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:37.140491   60933 fix.go:54] fixHost starting: 
	I1216 20:59:37.140933   60933 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:37.140988   60933 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:37.159075   60933 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39873
	I1216 20:59:37.159574   60933 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:37.160140   60933 main.go:141] libmachine: Using API Version  1
	I1216 20:59:37.160172   60933 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:37.160560   60933 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:37.160773   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:37.160889   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetState
	I1216 20:59:37.162561   60933 fix.go:112] recreateIfNeeded on old-k8s-version-847766: state=Stopped err=<nil>
	I1216 20:59:37.162603   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	W1216 20:59:37.162755   60933 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:37.166031   60933 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-847766" ...
	I1216 20:59:34.634064   60421 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.32.0: (1.973261206s)
	I1216 20:59:34.634117   60421 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.0 from cache
	I1216 20:59:34.634154   60421 cache_images.go:123] Successfully loaded all cached images
	I1216 20:59:34.634160   60421 cache_images.go:92] duration metric: took 16.064590407s to LoadCachedImages
	I1216 20:59:34.634171   60421 kubeadm.go:934] updating node { 192.168.50.240 8443 v1.32.0 crio true true} ...
	I1216 20:59:34.634331   60421 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-232338 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 20:59:34.634420   60421 ssh_runner.go:195] Run: crio config
	I1216 20:59:34.688034   60421 cni.go:84] Creating CNI manager for ""
	I1216 20:59:34.688059   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:34.688068   60421 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:59:34.688093   60421 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.240 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-232338 NodeName:no-preload-232338 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 20:59:34.688277   60421 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-232338"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.240"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.240"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 20:59:34.688356   60421 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 20:59:34.699709   60421 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:59:34.699784   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:59:34.710306   60421 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1216 20:59:34.732401   60421 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:59:34.757561   60421 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1216 20:59:34.776094   60421 ssh_runner.go:195] Run: grep 192.168.50.240	control-plane.minikube.internal$ /etc/hosts
	I1216 20:59:34.780341   60421 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:34.794025   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:34.930543   60421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:59:34.948720   60421 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338 for IP: 192.168.50.240
	I1216 20:59:34.948752   60421 certs.go:194] generating shared ca certs ...
	I1216 20:59:34.948776   60421 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:59:34.949035   60421 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:59:34.949094   60421 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:59:34.949115   60421 certs.go:256] generating profile certs ...
	I1216 20:59:34.949243   60421 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.key
	I1216 20:59:34.949327   60421 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.key.674e04e3
	I1216 20:59:34.949379   60421 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.key
	I1216 20:59:34.949509   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:59:34.949547   60421 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:59:34.949557   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:59:34.949582   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:59:34.949604   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:59:34.949627   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:59:34.949662   60421 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:34.950648   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:59:34.994491   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:59:35.029853   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:59:35.058834   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:59:35.096870   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 20:59:35.126467   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 20:59:35.160826   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:59:35.186344   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 20:59:35.211125   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:59:35.238705   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:59:35.266485   60421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:59:35.291729   60421 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 20:59:35.311939   60421 ssh_runner.go:195] Run: openssl version
	I1216 20:59:35.318397   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:59:35.332081   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.336967   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.337022   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:59:35.343307   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 20:59:35.356515   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:59:35.370380   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.375538   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.375589   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:35.381736   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:59:35.395677   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:59:35.409029   60421 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.414358   60421 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.414427   60421 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:59:35.421352   60421 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 20:59:35.435322   60421 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:59:35.440479   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 20:59:35.447408   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 20:59:35.453992   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 20:59:35.460713   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 20:59:35.467109   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 20:59:35.473412   60421 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 20:59:35.479720   60421 kubeadm.go:392] StartCluster: {Name:no-preload-232338 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:no-preload-232338 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:59:35.479824   60421 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:59:35.479901   60421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:35.521238   60421 cri.go:89] found id: ""
	I1216 20:59:35.521331   60421 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 20:59:35.534818   60421 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 20:59:35.534848   60421 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 20:59:35.534893   60421 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 20:59:35.547460   60421 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 20:59:35.548501   60421 kubeconfig.go:125] found "no-preload-232338" server: "https://192.168.50.240:8443"
	I1216 20:59:35.550575   60421 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 20:59:35.560957   60421 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.240
	I1216 20:59:35.561018   60421 kubeadm.go:1160] stopping kube-system containers ...
	I1216 20:59:35.561033   60421 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 20:59:35.561094   60421 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:35.598970   60421 cri.go:89] found id: ""
	I1216 20:59:35.599082   60421 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 20:59:35.618027   60421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:59:35.629418   60421 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:59:35.629455   60421 kubeadm.go:157] found existing configuration files:
	
	I1216 20:59:35.629501   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 20:59:35.639825   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:59:35.639896   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:59:35.650676   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 20:59:35.662171   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:59:35.662228   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:59:35.674780   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 20:59:35.686565   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:59:35.686640   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:59:35.698956   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 20:59:35.710813   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:59:35.710874   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 20:59:35.723307   60421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 20:59:35.734712   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:35.863375   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.021512   60421 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.158099337s)
	I1216 20:59:37.021546   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.269641   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.348978   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:37.428210   60421 api_server.go:52] waiting for apiserver process to appear ...
	I1216 20:59:37.428296   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:35.800344   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.800861   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Found IP for machine: 192.168.39.162
	I1216 20:59:35.800889   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has current primary IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.800899   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Reserving static IP address...
	I1216 20:59:35.801367   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-327790", mac: "52:54:00:68:47:d5", ip: "192.168.39.162"} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.801395   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Reserved static IP address: 192.168.39.162
	I1216 20:59:35.801419   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | skip adding static IP to network mk-default-k8s-diff-port-327790 - found existing host DHCP lease matching {name: "default-k8s-diff-port-327790", mac: "52:54:00:68:47:d5", ip: "192.168.39.162"}
	I1216 20:59:35.801439   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Waiting for SSH to be available...
	I1216 20:59:35.801452   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Getting to WaitForSSH function...
	I1216 20:59:35.803875   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.804226   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.804257   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.804407   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Using SSH client type: external
	I1216 20:59:35.804439   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa (-rw-------)
	I1216 20:59:35.804472   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:35.804493   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | About to run SSH command:
	I1216 20:59:35.804517   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | exit 0
	I1216 20:59:35.935325   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | SSH cmd err, output: <nil>: 
	I1216 20:59:35.935765   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetConfigRaw
	I1216 20:59:35.936442   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:35.938945   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.939369   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.939395   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.939654   60829 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/config.json ...
	I1216 20:59:35.939915   60829 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:35.939938   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:35.940183   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:35.942412   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.942758   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:35.942787   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:35.942885   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:35.943067   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:35.943205   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:35.943347   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:35.943501   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:35.943687   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:35.943697   60829 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:36.060257   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:36.060297   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.060608   60829 buildroot.go:166] provisioning hostname "default-k8s-diff-port-327790"
	I1216 20:59:36.060634   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.060853   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.063758   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.064060   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.064097   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.064222   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.064427   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.064600   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.064745   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.064910   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.065132   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.065151   60829 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-327790 && echo "default-k8s-diff-port-327790" | sudo tee /etc/hostname
	I1216 20:59:36.194522   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-327790
	
	I1216 20:59:36.194555   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.197422   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.197770   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.197818   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.198007   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.198217   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.198446   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.198606   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.198803   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.199037   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.199062   60829 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-327790' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-327790/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-327790' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:36.320779   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:36.320808   60829 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:36.320833   60829 buildroot.go:174] setting up certificates
	I1216 20:59:36.320845   60829 provision.go:84] configureAuth start
	I1216 20:59:36.320854   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetMachineName
	I1216 20:59:36.321171   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:36.323701   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.324019   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.324044   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.324254   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.326002   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.326317   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.326348   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.326478   60829 provision.go:143] copyHostCerts
	I1216 20:59:36.326555   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:36.326567   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:36.326635   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:36.326747   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:36.326759   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:36.326786   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:36.326856   60829 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:36.326866   60829 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:36.326887   60829 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:36.326949   60829 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-327790 san=[127.0.0.1 192.168.39.162 default-k8s-diff-port-327790 localhost minikube]
	I1216 20:59:36.480215   60829 provision.go:177] copyRemoteCerts
	I1216 20:59:36.480278   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:36.480304   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.482859   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.483213   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.483258   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.483500   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.483712   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.483903   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.484087   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:36.571252   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:36.599399   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1216 20:59:36.624194   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 20:59:36.649294   60829 provision.go:87] duration metric: took 328.437433ms to configureAuth
	I1216 20:59:36.649325   60829 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:36.649494   60829 config.go:182] Loaded profile config "default-k8s-diff-port-327790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:59:36.649567   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.652411   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.652838   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.652868   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.653006   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.653264   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.653490   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.653704   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.653879   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:36.654059   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:36.654076   60829 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:36.893006   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:36.893043   60829 machine.go:96] duration metric: took 953.113126ms to provisionDockerMachine
	I1216 20:59:36.893057   60829 start.go:293] postStartSetup for "default-k8s-diff-port-327790" (driver="kvm2")
	I1216 20:59:36.893070   60829 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:36.893101   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:36.893466   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:36.893494   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:36.896151   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.896531   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:36.896561   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:36.896683   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:36.896893   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:36.897100   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:36.897280   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:36.982077   60829 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:36.986598   60829 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:36.986624   60829 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:36.986702   60829 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:36.986795   60829 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:36.986919   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:36.996453   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:37.021838   60829 start.go:296] duration metric: took 128.770799ms for postStartSetup
	I1216 20:59:37.021873   60829 fix.go:56] duration metric: took 19.961410312s for fixHost
	I1216 20:59:37.021896   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.024668   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.025171   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.025207   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.025369   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.025591   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.025746   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.025892   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.026040   60829 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:37.026257   60829 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1216 20:59:37.026273   60829 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:37.140228   60829 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382777.110726967
	
	I1216 20:59:37.140254   60829 fix.go:216] guest clock: 1734382777.110726967
	I1216 20:59:37.140264   60829 fix.go:229] Guest: 2024-12-16 20:59:37.110726967 +0000 UTC Remote: 2024-12-16 20:59:37.021877328 +0000 UTC m=+246.706572335 (delta=88.849639ms)
	I1216 20:59:37.140308   60829 fix.go:200] guest clock delta is within tolerance: 88.849639ms
	I1216 20:59:37.140315   60829 start.go:83] releasing machines lock for "default-k8s-diff-port-327790", held for 20.079880217s
	I1216 20:59:37.140347   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.140632   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:37.143268   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.143748   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.143775   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.143983   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144601   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144789   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 20:59:37.144883   60829 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:37.144930   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.145028   60829 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:37.145060   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 20:59:37.147817   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148192   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.148219   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148315   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148364   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.148576   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.148755   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:37.148776   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:37.148804   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.148964   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:37.149020   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 20:59:37.149141   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 20:59:37.149285   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 20:59:37.149439   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 20:59:37.232354   60829 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:37.261803   60829 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:37.416094   60829 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:37.425458   60829 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:37.425566   60829 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:37.448873   60829 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:37.448914   60829 start.go:495] detecting cgroup driver to use...
	I1216 20:59:37.449014   60829 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:37.472474   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:37.492445   60829 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:37.492518   60829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:37.510478   60829 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:37.525452   60829 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:37.642105   60829 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:37.814506   60829 docker.go:233] disabling docker service ...
	I1216 20:59:37.814590   60829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:37.829046   60829 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:37.845049   60829 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:38.009931   60829 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:38.158000   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:38.174376   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:38.197489   60829 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 20:59:38.197555   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.213974   60829 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:38.214034   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.230383   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.244599   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.257574   60829 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:38.273377   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.285854   60829 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.312687   60829 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:38.329105   60829 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:38.343596   60829 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:38.343679   60829 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:38.362530   60829 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 20:59:38.374384   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:38.564793   60829 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:38.682792   60829 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:38.682873   60829 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:38.689164   60829 start.go:563] Will wait 60s for crictl version
	I1216 20:59:38.689251   60829 ssh_runner.go:195] Run: which crictl
	I1216 20:59:38.693994   60829 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:38.746808   60829 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:38.746913   60829 ssh_runner.go:195] Run: crio --version
	I1216 20:59:38.788490   60829 ssh_runner.go:195] Run: crio --version
	I1216 20:59:38.823957   60829 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
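For context on the restart sequence above (rewrite /etc/crio/crio.conf.d/02-crio.conf, restart crio, then "Will wait 60s for socket path /var/run/crio/crio.sock"): a minimal Go sketch of that kind of socket wait. The helper name and 500ms polling interval are assumptions for illustration, not minikube's actual implementation.
	// waitForSocket polls until the socket path exists or the timeout elapses.
	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket file is present; the runtime is up
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}
	
	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}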
	I1216 20:59:37.167470   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .Start
	I1216 20:59:37.167715   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring networks are active...
	I1216 20:59:37.168626   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring network default is active
	I1216 20:59:37.169114   60933 main.go:141] libmachine: (old-k8s-version-847766) Ensuring network mk-old-k8s-version-847766 is active
	I1216 20:59:37.169670   60933 main.go:141] libmachine: (old-k8s-version-847766) Getting domain xml...
	I1216 20:59:37.170345   60933 main.go:141] libmachine: (old-k8s-version-847766) Creating domain...
	I1216 20:59:38.535579   60933 main.go:141] libmachine: (old-k8s-version-847766) Waiting to get IP...
	I1216 20:59:38.536661   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:38.537089   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:38.537174   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:38.537078   61973 retry.go:31] will retry after 277.62307ms: waiting for machine to come up
	I1216 20:59:38.816788   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:38.817329   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:38.817360   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:38.817272   61973 retry.go:31] will retry after 346.694382ms: waiting for machine to come up
	I1216 20:59:39.165778   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:39.166377   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:39.166436   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:39.166355   61973 retry.go:31] will retry after 416.599295ms: waiting for machine to come up
	I1216 20:59:38.825413   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetIP
	I1216 20:59:38.828442   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:38.828836   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 20:59:38.828870   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 20:59:38.829125   60829 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:38.833715   60829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:38.848989   60829 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-327790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:38.849121   60829 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 20:59:38.849169   60829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:38.891356   60829 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 20:59:38.891432   60829 ssh_runner.go:195] Run: which lz4
	I1216 20:59:38.896669   60829 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 20:59:38.901209   60829 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 20:59:38.901253   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1216 20:59:37.928929   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:38.428939   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:38.454184   60421 api_server.go:72] duration metric: took 1.02597754s to wait for apiserver process to appear ...
	I1216 20:59:38.454211   60421 api_server.go:88] waiting for apiserver healthz status ...
	I1216 20:59:38.454252   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:38.454842   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 20:59:38.954378   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
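The api_server.go lines above poll https://192.168.50.240:8443/healthz on a ~500ms cadence, treating "connection refused" as retryable until a deadline passes. A minimal Go sketch of that kind of healthz wait follows; the helper name is hypothetical and TLS verification is skipped here purely to keep the sketch short.
	// waitForHealthz polls a /healthz URL until it returns 200 or the deadline passes.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // same cadence as the retries in the log
		}
		return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
	}
	
	func main() {
		fmt.Println(waitForHealthz("https://192.168.50.240:8443/healthz", 4*time.Minute))
	}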
	I1216 20:59:39.585259   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:39.585762   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:39.585791   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:39.585737   61973 retry.go:31] will retry after 526.969594ms: waiting for machine to come up
	I1216 20:59:40.114653   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:40.115175   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:40.115205   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:40.115140   61973 retry.go:31] will retry after 502.283372ms: waiting for machine to come up
	I1216 20:59:40.619067   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:40.619633   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:40.619682   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:40.619571   61973 retry.go:31] will retry after 764.799982ms: waiting for machine to come up
	I1216 20:59:41.385515   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:41.386066   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:41.386100   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:41.386027   61973 retry.go:31] will retry after 982.237202ms: waiting for machine to come up
	I1216 20:59:42.369934   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:42.370414   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:42.370449   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:42.370373   61973 retry.go:31] will retry after 1.163280736s: waiting for machine to come up
	I1216 20:59:43.534829   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:43.535194   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:43.535224   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:43.535143   61973 retry.go:31] will retry after 1.630958514s: waiting for machine to come up
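The retry.go lines above wait for the old-k8s-version-847766 machine to obtain an IP, sleeping progressively longer between attempts (277ms, 346ms, 416ms, ...). A minimal Go sketch of a growing, jittered backoff loop in that spirit; the multiplier, jitter, and 5s cap are assumptions for illustration only.
	// retryWithBackoff retries fn with an exponentially growing, jittered delay.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// add up to 50% jitter, then grow the base delay, capped at 5s
			sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
			time.Sleep(sleep)
			if delay *= 2; delay > 5*time.Second {
				delay = 5 * time.Second
			}
		}
		return fmt.Errorf("still failing after %d attempts: %w", attempts, err)
	}
	
	func main() {
		err := retryWithBackoff(5, 300*time.Millisecond, func() error {
			return errors.New("machine has no IP yet") // stand-in for the DHCP lease lookup
		})
		fmt.Println(err)
	}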
	I1216 20:59:40.539994   60829 crio.go:462] duration metric: took 1.643361409s to copy over tarball
	I1216 20:59:40.540066   60829 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 20:59:42.840346   60829 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.30025199s)
	I1216 20:59:42.840382   60829 crio.go:469] duration metric: took 2.300357568s to extract the tarball
	I1216 20:59:42.840392   60829 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 20:59:42.881650   60829 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:42.928089   60829 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 20:59:42.928120   60829 cache_images.go:84] Images are preloaded, skipping loading
	I1216 20:59:42.928129   60829 kubeadm.go:934] updating node { 192.168.39.162 8444 v1.32.0 crio true true} ...
	I1216 20:59:42.928222   60829 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-327790 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 20:59:42.928286   60829 ssh_runner.go:195] Run: crio config
	I1216 20:59:42.983315   60829 cni.go:84] Creating CNI manager for ""
	I1216 20:59:42.983348   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:42.983360   60829 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 20:59:42.983396   60829 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8444 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-327790 NodeName:default-k8s-diff-port-327790 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 20:59:42.983556   60829 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-327790"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.162"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 20:59:42.983631   60829 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 20:59:42.996192   60829 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 20:59:42.996283   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 20:59:43.008389   60829 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1216 20:59:43.027984   60829 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 20:59:43.045672   60829 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1216 20:59:43.063620   60829 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1216 20:59:43.067925   60829 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:43.082946   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:43.220929   60829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 20:59:43.243843   60829 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790 for IP: 192.168.39.162
	I1216 20:59:43.243870   60829 certs.go:194] generating shared ca certs ...
	I1216 20:59:43.243888   60829 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 20:59:43.244125   60829 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 20:59:43.244185   60829 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 20:59:43.244200   60829 certs.go:256] generating profile certs ...
	I1216 20:59:43.244324   60829 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.key
	I1216 20:59:43.244400   60829 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.key.0f0bf709
	I1216 20:59:43.244449   60829 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.key
	I1216 20:59:43.244606   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 20:59:43.244649   60829 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 20:59:43.244666   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 20:59:43.244689   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 20:59:43.244711   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 20:59:43.244731   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 20:59:43.244776   60829 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:43.245449   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 20:59:43.283598   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 20:59:43.309321   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 20:59:43.343071   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 20:59:43.379763   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1216 20:59:43.409794   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 20:59:43.437074   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 20:59:43.462616   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 20:59:43.487711   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 20:59:43.512636   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 20:59:43.539050   60829 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 20:59:43.566507   60829 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 20:59:43.584425   60829 ssh_runner.go:195] Run: openssl version
	I1216 20:59:43.590996   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 20:59:43.604384   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.609342   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.609404   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 20:59:43.615902   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 20:59:43.627432   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 20:59:43.638929   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.644189   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.644267   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 20:59:43.650550   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 20:59:43.662678   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 20:59:43.674981   60829 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.680022   60829 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.680113   60829 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 20:59:43.686159   60829 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 20:59:43.697897   60829 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 20:59:43.702835   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 20:59:43.709262   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 20:59:43.716370   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 20:59:43.725031   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 20:59:43.732876   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 20:59:43.739810   60829 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 20:59:43.746998   60829 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-327790 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:default-k8s-diff-port-327790 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 20:59:43.747131   60829 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 20:59:43.747189   60829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:43.791895   60829 cri.go:89] found id: ""
	I1216 20:59:43.791979   60829 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 20:59:43.802858   60829 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 20:59:43.802886   60829 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 20:59:43.802943   60829 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 20:59:43.813313   60829 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 20:59:43.814296   60829 kubeconfig.go:125] found "default-k8s-diff-port-327790" server: "https://192.168.39.162:8444"
	I1216 20:59:43.816374   60829 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 20:59:43.825834   60829 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1216 20:59:43.825871   60829 kubeadm.go:1160] stopping kube-system containers ...
	I1216 20:59:43.825884   60829 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 20:59:43.825934   60829 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 20:59:43.870890   60829 cri.go:89] found id: ""
	I1216 20:59:43.870965   60829 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 20:59:43.888155   60829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 20:59:43.898356   60829 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 20:59:43.898381   60829 kubeadm.go:157] found existing configuration files:
	
	I1216 20:59:43.898445   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1216 20:59:43.908232   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 20:59:43.908310   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 20:59:43.918637   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1216 20:59:43.928255   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 20:59:43.928343   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 20:59:43.938479   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1216 20:59:43.948085   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 20:59:43.948157   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 20:59:43.959080   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1216 20:59:43.969218   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 20:59:43.969275   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 20:59:43.980063   60829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 20:59:43.990768   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:44.125741   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:44.845177   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:45.049512   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:45.162055   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:45.284927   60829 api_server.go:52] waiting for apiserver process to appear ...
	I1216 20:59:45.285036   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
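The restart path above drives kubeadm in discrete phases (certs, kubeconfig, kubelet-start, control-plane, etcd local) against /var/tmp/minikube/kubeadm.yaml before polling for the apiserver process. A minimal Go sketch of running that same phase sequence locally; the binary path and phase list are copied from the log, everything else (error handling, helper structure) is an illustrative assumption.
	// Run the kubeadm "init phase" sequence used by the cluster restart path.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("/var/lib/minikube/binaries/v1.32.0/kubeadm", args...)
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
				return
			}
		}
		fmt.Println("all init phases completed")
	}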
	I1216 20:59:43.954985   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:43.955087   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:45.168144   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:45.168719   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:45.168750   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:45.168671   61973 retry.go:31] will retry after 1.835631107s: waiting for machine to come up
	I1216 20:59:47.005854   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:47.006380   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:47.006422   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:47.006339   61973 retry.go:31] will retry after 1.943800898s: waiting for machine to come up
	I1216 20:59:48.951552   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:48.952050   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:48.952114   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:48.952008   61973 retry.go:31] will retry after 2.949898251s: waiting for machine to come up
	I1216 20:59:45.785964   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:46.285989   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:59:46.339555   60829 api_server.go:72] duration metric: took 1.054628295s to wait for apiserver process to appear ...
	I1216 20:59:46.339597   60829 api_server.go:88] waiting for apiserver healthz status ...
	I1216 20:59:46.339636   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:46.340197   60829 api_server.go:269] stopped: https://192.168.39.162:8444/healthz: Get "https://192.168.39.162:8444/healthz": dial tcp 192.168.39.162:8444: connect: connection refused
	I1216 20:59:46.839771   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.461907   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 20:59:49.461943   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 20:59:49.461958   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.513069   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 20:59:49.513121   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 20:59:49.840517   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:49.846051   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 20:59:49.846086   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 20:59:50.339824   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:50.347663   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 20:59:50.347708   60829 api_server.go:103] status: https://192.168.39.162:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 20:59:50.840385   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 20:59:50.844943   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1216 20:59:50.854518   60829 api_server.go:141] control plane version: v1.32.0
	I1216 20:59:50.854546   60829 api_server.go:131] duration metric: took 4.514941385s to wait for apiserver health ...
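For context, a minimal sketch (not minikube's actual api_server.go) of the kind of /healthz polling logged above: keep retrying until the apiserver returns 200, treating 403 (anonymous forbidden) and 500 (poststarthooks still pending) as "not ready yet". URL, timeouts, and the insecure TLS config are illustrative assumptions.

```go
// Illustrative sketch only; not minikube's implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver's cert is typically not trusted by the host, hence InsecureSkipVerify here (assumption).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok"
			}
			// 403 and 500 both mean the control plane is still coming up; keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.162:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```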
	I1216 20:59:50.854554   60829 cni.go:84] Creating CNI manager for ""
	I1216 20:59:50.854560   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 20:59:50.856538   60829 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 20:59:48.956352   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:48.956414   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:51.905108   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:51.905560   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | unable to find current IP address of domain old-k8s-version-847766 in network mk-old-k8s-version-847766
	I1216 20:59:51.905594   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | I1216 20:59:51.905505   61973 retry.go:31] will retry after 3.44069953s: waiting for machine to come up
	I1216 20:59:50.858169   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 20:59:50.882809   60829 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
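The 1-k8s.conflist copied above is a bridge CNI configuration. A rough sketch of writing a file of that shape is below; the exact field values (subnet, plugin options, cniVersion) are assumptions, not the literal 496-byte file from the log.

```go
// Illustrative sketch only; field values are assumed, not taken from the actual conflist.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```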
	I1216 20:59:50.912787   60829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 20:59:50.933650   60829 system_pods.go:59] 8 kube-system pods found
	I1216 20:59:50.933693   60829 system_pods.go:61] "coredns-668d6bf9bc-tqh9s" [56b4db37-b6bc-49eb-b45f-b8b4d1f16eed] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 20:59:50.933705   60829 system_pods.go:61] "etcd-default-k8s-diff-port-327790" [067f7c41-3763-42d3-af06-ad50fad3d206] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 20:59:50.933713   60829 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-327790" [f1964b5b-9d2b-4f82-afc6-2f359c9b8827] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 20:59:50.933722   60829 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-327790" [fd7479e3-be26-4bb0-b53a-e40766a33996] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 20:59:50.933742   60829 system_pods.go:61] "kube-proxy-mplxr" [027abdc5-7022-4528-a93f-36f3b10115ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 20:59:50.933751   60829 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-327790" [d7416a53-ccb4-46fd-9992-46cbf7ec0a3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 20:59:50.933763   60829 system_pods.go:61] "metrics-server-f79f97bbb-hlt7s" [d42906e3-387c-493e-9d06-5bb654dc9784] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 20:59:50.933772   60829 system_pods.go:61] "storage-provisioner" [c774635a-faca-4a1a-8f4e-2161447ebaa1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 20:59:50.933785   60829 system_pods.go:74] duration metric: took 20.968988ms to wait for pod list to return data ...
	I1216 20:59:50.933804   60829 node_conditions.go:102] verifying NodePressure condition ...
	I1216 20:59:50.937958   60829 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 20:59:50.937986   60829 node_conditions.go:123] node cpu capacity is 2
	I1216 20:59:50.938008   60829 node_conditions.go:105] duration metric: took 4.196302ms to run NodePressure ...
	I1216 20:59:50.938030   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 20:59:51.231412   60829 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 20:59:51.236005   60829 kubeadm.go:739] kubelet initialised
	I1216 20:59:51.236029   60829 kubeadm.go:740] duration metric: took 4.585977ms waiting for restarted kubelet to initialise ...
	I1216 20:59:51.236042   60829 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 20:59:51.243608   60829 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace to be "Ready" ...
	I1216 20:59:53.250907   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 20:59:56.696377   60215 start.go:364] duration metric: took 54.44579772s to acquireMachinesLock for "embed-certs-606219"
	I1216 20:59:56.696450   60215 start.go:96] Skipping create...Using existing machine configuration
	I1216 20:59:56.696470   60215 fix.go:54] fixHost starting: 
	I1216 20:59:56.696862   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 20:59:56.696902   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:59:56.714627   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42069
	I1216 20:59:56.715074   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:59:56.715599   60215 main.go:141] libmachine: Using API Version  1
	I1216 20:59:56.715629   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:59:56.715953   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:59:56.716116   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 20:59:56.716252   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 20:59:56.717876   60215 fix.go:112] recreateIfNeeded on embed-certs-606219: state=Stopped err=<nil>
	I1216 20:59:56.717902   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	W1216 20:59:56.718088   60215 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 20:59:56.720072   60215 out.go:177] * Restarting existing kvm2 VM for "embed-certs-606219" ...
	I1216 20:59:53.957328   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:53.957395   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:55.349557   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.350105   60933 main.go:141] libmachine: (old-k8s-version-847766) Found IP for machine: 192.168.72.240
	I1216 20:59:55.350129   60933 main.go:141] libmachine: (old-k8s-version-847766) Reserving static IP address...
	I1216 20:59:55.350140   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has current primary IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.350574   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "old-k8s-version-847766", mac: "52:54:00:c4:f2:8a", ip: "192.168.72.240"} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.350608   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | skip adding static IP to network mk-old-k8s-version-847766 - found existing host DHCP lease matching {name: "old-k8s-version-847766", mac: "52:54:00:c4:f2:8a", ip: "192.168.72.240"}
	I1216 20:59:55.350623   60933 main.go:141] libmachine: (old-k8s-version-847766) Reserved static IP address: 192.168.72.240
	I1216 20:59:55.350646   60933 main.go:141] libmachine: (old-k8s-version-847766) Waiting for SSH to be available...
	I1216 20:59:55.350662   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Getting to WaitForSSH function...
	I1216 20:59:55.353011   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.353346   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.353369   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.353535   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Using SSH client type: external
	I1216 20:59:55.353560   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa (-rw-------)
	I1216 20:59:55.353592   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 20:59:55.353606   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | About to run SSH command:
	I1216 20:59:55.353621   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | exit 0
	I1216 20:59:55.480726   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | SSH cmd err, output: <nil>: 
	I1216 20:59:55.481062   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetConfigRaw
	I1216 20:59:55.481692   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:55.484113   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.484500   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.484537   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.484769   60933 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/config.json ...
	I1216 20:59:55.484985   60933 machine.go:93] provisionDockerMachine start ...
	I1216 20:59:55.485008   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:55.485220   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.487511   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.487835   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.487862   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.487958   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.488134   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.488268   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.488405   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.488546   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.488725   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.488735   60933 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 20:59:55.596092   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 20:59:55.596127   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.596401   60933 buildroot.go:166] provisioning hostname "old-k8s-version-847766"
	I1216 20:59:55.596426   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.596644   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.599286   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.599631   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.599662   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.599814   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.600010   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.600166   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.600299   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.600462   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.600665   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.600678   60933 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-847766 && echo "old-k8s-version-847766" | sudo tee /etc/hostname
	I1216 20:59:55.731851   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-847766
	
	I1216 20:59:55.731879   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.734802   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.735155   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.735186   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.735422   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:55.735650   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.735815   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:55.736030   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:55.736194   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:55.736377   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:55.736393   60933 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-847766' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-847766/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-847766' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 20:59:55.857050   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 20:59:55.857108   60933 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 20:59:55.857138   60933 buildroot.go:174] setting up certificates
	I1216 20:59:55.857163   60933 provision.go:84] configureAuth start
	I1216 20:59:55.857180   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetMachineName
	I1216 20:59:55.857505   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:55.860286   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.860613   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.860643   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.860826   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:55.863292   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.863682   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:55.863709   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:55.863871   60933 provision.go:143] copyHostCerts
	I1216 20:59:55.863920   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 20:59:55.863929   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 20:59:55.863986   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 20:59:55.864069   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 20:59:55.864077   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 20:59:55.864104   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 20:59:55.864159   60933 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 20:59:55.864177   60933 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 20:59:55.864202   60933 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 20:59:55.864250   60933 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-847766 san=[127.0.0.1 192.168.72.240 localhost minikube old-k8s-version-847766]
	I1216 20:59:56.058548   60933 provision.go:177] copyRemoteCerts
	I1216 20:59:56.058603   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 20:59:56.058638   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.061354   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.061666   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.061707   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.061838   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.062039   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.062200   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.062356   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.146788   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 20:59:56.172789   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1216 20:59:56.198040   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 20:59:56.222476   60933 provision.go:87] duration metric: took 365.299433ms to configureAuth
	I1216 20:59:56.222505   60933 buildroot.go:189] setting minikube options for container-runtime
	I1216 20:59:56.222706   60933 config.go:182] Loaded profile config "old-k8s-version-847766": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:59:56.222790   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.225376   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.225752   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.225779   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.225965   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.226182   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.226363   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.226516   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.226687   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:56.226887   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:56.226906   60933 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 20:59:56.451434   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 20:59:56.451464   60933 machine.go:96] duration metric: took 966.463181ms to provisionDockerMachine
	I1216 20:59:56.451478   60933 start.go:293] postStartSetup for "old-k8s-version-847766" (driver="kvm2")
	I1216 20:59:56.451513   60933 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 20:59:56.451541   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.451926   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 20:59:56.451980   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.454840   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.455302   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.455331   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.455454   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.455661   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.455814   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.455988   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.542904   60933 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 20:59:56.547362   60933 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 20:59:56.547389   60933 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 20:59:56.547467   60933 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 20:59:56.547568   60933 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 20:59:56.547677   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 20:59:56.557902   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 20:59:56.582796   60933 start.go:296] duration metric: took 131.303406ms for postStartSetup
	I1216 20:59:56.582846   60933 fix.go:56] duration metric: took 19.442354832s for fixHost
	I1216 20:59:56.582872   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.585478   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.585803   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.585831   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.586011   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.586194   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.586358   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.586472   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.586640   60933 main.go:141] libmachine: Using SSH client type: native
	I1216 20:59:56.586809   60933 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.240 22 <nil> <nil>}
	I1216 20:59:56.586819   60933 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 20:59:56.696254   60933 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382796.650794736
	
	I1216 20:59:56.696274   60933 fix.go:216] guest clock: 1734382796.650794736
	I1216 20:59:56.696281   60933 fix.go:229] Guest: 2024-12-16 20:59:56.650794736 +0000 UTC Remote: 2024-12-16 20:59:56.582851742 +0000 UTC m=+262.230512454 (delta=67.942994ms)
	I1216 20:59:56.696299   60933 fix.go:200] guest clock delta is within tolerance: 67.942994ms
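The guest-clock check above compares the VM's `date +%s.%N` output against the host clock and accepts the machine if the skew is small. A minimal sketch of that comparison follows; the one-second tolerance is an assumption for illustration.

```go
// Illustrative sketch only: parse a guest "seconds.nanoseconds" timestamp and compare to the host clock.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func clockDelta(guestUnixSeconds string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestUnixSeconds, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return host.Sub(guest), nil
}

func main() {
	// Guest timestamp as printed in the log above.
	delta, err := clockDelta("1734382796.650794736", time.Now())
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed tolerance
	if math.Abs(float64(delta)) > float64(tolerance) {
		fmt.Printf("guest clock delta %s exceeds tolerance %s\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
	}
}
```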
	I1216 20:59:56.696304   60933 start.go:83] releasing machines lock for "old-k8s-version-847766", held for 19.555844424s
	I1216 20:59:56.696333   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.696608   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:56.699486   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.699932   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.699964   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.700068   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700645   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700846   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .DriverName
	I1216 20:59:56.700948   60933 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 20:59:56.701007   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.701115   60933 ssh_runner.go:195] Run: cat /version.json
	I1216 20:59:56.701140   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHHostname
	I1216 20:59:56.703937   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704117   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704314   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.704342   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704496   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.704567   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:56.704601   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:56.704680   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.704762   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHPort
	I1216 20:59:56.704836   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.704982   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHKeyPath
	I1216 20:59:56.704987   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.705134   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetSSHUsername
	I1216 20:59:56.705259   60933 sshutil.go:53] new ssh client: &{IP:192.168.72.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/old-k8s-version-847766/id_rsa Username:docker}
	I1216 20:59:56.784295   60933 ssh_runner.go:195] Run: systemctl --version
	I1216 20:59:56.817481   60933 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 20:59:56.968124   60933 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 20:59:56.979827   60933 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 20:59:56.979892   60933 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 20:59:56.997867   60933 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 20:59:56.997891   60933 start.go:495] detecting cgroup driver to use...
	I1216 20:59:56.997954   60933 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 20:59:57.016064   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 20:59:57.031596   60933 docker.go:217] disabling cri-docker service (if available) ...
	I1216 20:59:57.031665   60933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 20:59:57.047562   60933 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 20:59:57.062737   60933 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 20:59:57.183918   60933 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 20:59:57.354699   60933 docker.go:233] disabling docker service ...
	I1216 20:59:57.354794   60933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 20:59:57.373311   60933 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 20:59:57.390014   60933 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 20:59:57.523623   60933 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 20:59:57.656261   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 20:59:57.671374   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 20:59:57.692647   60933 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1216 20:59:57.692709   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.704496   60933 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 20:59:57.704548   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.715848   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 20:59:57.727022   60933 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
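After the sed edits above, /etc/crio/crio.conf.d/02-crio.conf would carry entries along the following lines; the section headers and surrounding keys are assumptions about a typical cri-o drop-in, not the file's verbatim contents.

```go
// Illustrative sketch only: print the kind of drop-in the sed commands above produce.
package main

import "fmt"

func main() {
	fmt.Print(`[crio.image]
pause_image = "registry.k8s.io/pause:3.2"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
`)
}
```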
	I1216 20:59:57.738899   60933 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 20:59:57.756457   60933 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 20:59:57.773236   60933 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 20:59:57.773289   60933 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 20:59:57.789209   60933 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
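The three steps above form a small netfilter-preparation pattern: probe the bridge sysctl, fall back to loading br_netfilter when it is missing, then enable IPv4 forwarding. A compact sketch of that flow (simplified error handling, requires root) is:

```go
// Illustrative sketch only; mirrors the commands in the log above, not minikube's code.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl is missing, the br_netfilter module is not loaded yet.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			log.Fatalf("loading br_netfilter: %v", err)
		}
	}
	// Enable IPv4 forwarding so pod traffic can be routed off the node.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		log.Fatalf("enabling ip_forward: %v", err)
	}
}
```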
	I1216 20:59:57.800881   60933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 20:59:57.927794   60933 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 20:59:58.038173   60933 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 20:59:58.038256   60933 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 20:59:58.044633   60933 start.go:563] Will wait 60s for crictl version
	I1216 20:59:58.044705   60933 ssh_runner.go:195] Run: which crictl
	I1216 20:59:58.048781   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 20:59:58.088449   60933 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 20:59:58.088579   60933 ssh_runner.go:195] Run: crio --version
	I1216 20:59:58.119211   60933 ssh_runner.go:195] Run: crio --version
	I1216 20:59:58.151411   60933 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1216 20:59:58.152582   60933 main.go:141] libmachine: (old-k8s-version-847766) Calling .GetIP
	I1216 20:59:58.155196   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:58.155558   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:f2:8a", ip: ""} in network mk-old-k8s-version-847766: {Iface:virbr4 ExpiryTime:2024-12-16 21:59:49 +0000 UTC Type:0 Mac:52:54:00:c4:f2:8a Iaid: IPaddr:192.168.72.240 Prefix:24 Hostname:old-k8s-version-847766 Clientid:01:52:54:00:c4:f2:8a}
	I1216 20:59:58.155587   60933 main.go:141] libmachine: (old-k8s-version-847766) DBG | domain old-k8s-version-847766 has defined IP address 192.168.72.240 and MAC address 52:54:00:c4:f2:8a in network mk-old-k8s-version-847766
	I1216 20:59:58.155763   60933 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1216 20:59:58.160369   60933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 20:59:58.174013   60933 kubeadm.go:883] updating cluster {Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 20:59:58.174155   60933 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 20:59:58.174212   60933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 20:59:58.226674   60933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 20:59:58.226747   60933 ssh_runner.go:195] Run: which lz4
	I1216 20:59:58.231330   60933 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 20:59:58.236178   60933 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 20:59:58.236214   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1216 20:59:56.721746   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Start
	I1216 20:59:56.721946   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring networks are active...
	I1216 20:59:56.722810   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring network default is active
	I1216 20:59:56.723209   60215 main.go:141] libmachine: (embed-certs-606219) Ensuring network mk-embed-certs-606219 is active
	I1216 20:59:56.723644   60215 main.go:141] libmachine: (embed-certs-606219) Getting domain xml...
	I1216 20:59:56.724387   60215 main.go:141] libmachine: (embed-certs-606219) Creating domain...
	I1216 20:59:58.005906   60215 main.go:141] libmachine: (embed-certs-606219) Waiting to get IP...
	I1216 20:59:58.006646   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.007021   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.007136   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.007017   62108 retry.go:31] will retry after 280.124694ms: waiting for machine to come up
	I1216 20:59:58.288552   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.289049   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.289078   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.289013   62108 retry.go:31] will retry after 299.873899ms: waiting for machine to come up
	I1216 20:59:58.590757   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:58.591593   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:58.591625   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:58.591487   62108 retry.go:31] will retry after 486.884982ms: waiting for machine to come up
	I1216 20:59:59.079996   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:59.080618   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:59.080649   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:59.080581   62108 retry.go:31] will retry after 608.856993ms: waiting for machine to come up
	I1216 20:59:59.691549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 20:59:59.692107   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 20:59:59.692139   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 20:59:59.692064   62108 retry.go:31] will retry after 730.774006ms: waiting for machine to come up
	I1216 20:59:55.752607   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 20:59:58.251902   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:00.254126   60829 pod_ready.go:103] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"False"
	I1216 20:59:58.958114   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1216 20:59:58.958161   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.567722   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": read tcp 192.168.50.1:38738->192.168.50.240:8443: read: connection reset by peer
	I1216 20:59:59.567773   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.568271   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 20:59:59.954745   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 20:59:59.955447   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 21:00:00.455116   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:00.456036   60421 api_server.go:269] stopped: https://192.168.50.240:8443/healthz: Get "https://192.168.50.240:8443/healthz": dial tcp 192.168.50.240:8443: connect: connection refused
	I1216 21:00:00.954418   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:00.100507   60933 crio.go:462] duration metric: took 1.869217257s to copy over tarball
	I1216 21:00:00.100619   60933 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 21:00:03.581430   60933 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.480755636s)
	I1216 21:00:03.581469   60933 crio.go:469] duration metric: took 3.480924144s to extract the tarball
	I1216 21:00:03.581478   60933 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 21:00:03.627932   60933 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:03.667985   60933 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 21:00:03.668013   60933 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 21:00:03.668078   60933 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:03.668110   60933 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.668207   60933 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.668262   60933 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.668262   60933 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:03.668332   60933 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1216 21:00:03.668215   60933 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.668092   60933 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.670096   60933 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:03.670294   60933 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.670305   60933 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.670305   60933 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.670333   60933 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.670394   60933 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1216 21:00:03.670396   60933 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:03.670467   60933 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.861573   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.869704   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:03.885911   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:03.904748   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1216 21:00:03.905328   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:03.906138   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:03.936548   60933 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1216 21:00:03.936658   60933 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1216 21:00:03.936736   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.019039   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.033811   60933 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1216 21:00:04.033863   60933 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.033927   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.082946   60933 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1216 21:00:04.082995   60933 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.083008   60933 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1216 21:00:04.083050   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083055   60933 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1216 21:00:04.083063   60933 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1216 21:00:04.083073   60933 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.083133   60933 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1216 21:00:04.083203   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.083205   60933 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.083306   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083145   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.083139   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.123434   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.123702   60933 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1216 21:00:04.123740   60933 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.123786   60933 ssh_runner.go:195] Run: which crictl
	I1216 21:00:04.150878   60933 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:00:04.155586   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.155774   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.155877   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.155968   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.156205   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.226110   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.226429   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:00.424272   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:00.424766   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:00.424795   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:00.424712   62108 retry.go:31] will retry after 947.177724ms: waiting for machine to come up
	I1216 21:00:01.373798   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:01.374448   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:01.374486   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:01.374376   62108 retry.go:31] will retry after 755.735247ms: waiting for machine to come up
	I1216 21:00:02.132092   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:02.132690   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:02.132716   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:02.132636   62108 retry.go:31] will retry after 1.25933291s: waiting for machine to come up
	I1216 21:00:03.393390   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:03.393951   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:03.393987   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:03.393887   62108 retry.go:31] will retry after 1.654271195s: waiting for machine to come up
	I1216 21:00:00.768561   60829 pod_ready.go:93] pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:00.768603   60829 pod_ready.go:82] duration metric: took 9.524968022s for pod "coredns-668d6bf9bc-tqh9s" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:00.768619   60829 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:02.778467   60829 pod_ready.go:93] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:02.778507   60829 pod_ready.go:82] duration metric: took 2.009878604s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:02.778523   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:03.290454   60829 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:03.290490   60829 pod_ready.go:82] duration metric: took 511.956426ms for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:03.290505   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:04.533609   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:04.533639   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:04.533655   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:04.679801   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:04.679836   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:04.955306   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.723827   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.723870   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:05.723892   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.750638   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.750674   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:05.955092   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:05.983280   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:05.983332   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:06.454742   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:06.467886   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:06.467924   60421 api_server.go:103] status: https://192.168.50.240:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:06.954428   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:00:06.960039   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 200:
	ok
	I1216 21:00:06.969187   60421 api_server.go:141] control plane version: v1.32.0
	I1216 21:00:06.969231   60421 api_server.go:131] duration metric: took 28.515011952s to wait for apiserver health ...
	I1216 21:00:06.969242   60421 cni.go:84] Creating CNI manager for ""
	I1216 21:00:06.969249   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:06.971475   60421 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:00:06.973035   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:00:06.992348   60421 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:00:07.020819   60421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:00:07.035254   60421 system_pods.go:59] 8 kube-system pods found
	I1216 21:00:07.035308   60421 system_pods.go:61] "coredns-668d6bf9bc-snhjf" [c0cf42c8-521a-4d02-9d43-ff7a700b0eca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:00:07.035321   60421 system_pods.go:61] "etcd-no-preload-232338" [01ca2051-5953-44fd-bfff-40aa16ec7aca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 21:00:07.035335   60421 system_pods.go:61] "kube-apiserver-no-preload-232338" [f1fbbb3b-a0e5-4200-89ef-67085e51a31d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 21:00:07.035359   60421 system_pods.go:61] "kube-controller-manager-no-preload-232338" [200039ad-1a2c-4dc4-8307-d8c882d69f1b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 21:00:07.035373   60421 system_pods.go:61] "kube-proxy-5mw2b" [8fbddf14-8697-451a-a3c7-873fdd437247] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 21:00:07.035382   60421 system_pods.go:61] "kube-scheduler-no-preload-232338" [1b9a7a43-59fc-44ba-9863-04fb90e6554f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 21:00:07.035396   60421 system_pods.go:61] "metrics-server-f79f97bbb-5xf67" [447144e5-11d8-48f7-b2fd-7ab9fb3c04de] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:00:07.035409   60421 system_pods.go:61] "storage-provisioner" [fb293bd2-f5be-4086-b821-ffd7df58dd5e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 21:00:07.035420   60421 system_pods.go:74] duration metric: took 14.571089ms to wait for pod list to return data ...
	I1216 21:00:07.035431   60421 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:00:07.044467   60421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:00:07.044592   60421 node_conditions.go:123] node cpu capacity is 2
	I1216 21:00:07.044633   60421 node_conditions.go:105] duration metric: took 9.191874ms to run NodePressure ...
	I1216 21:00:07.044668   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.388388   60421 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 21:00:07.394851   60421 kubeadm.go:739] kubelet initialised
	I1216 21:00:07.394881   60421 kubeadm.go:740] duration metric: took 6.459945ms waiting for restarted kubelet to initialise ...
	I1216 21:00:07.394891   60421 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:00:07.401877   60421 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.410697   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.410732   60421 pod_ready.go:82] duration metric: took 8.80876ms for pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.410744   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "coredns-668d6bf9bc-snhjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.410755   60421 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.418118   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "etcd-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.418149   60421 pod_ready.go:82] duration metric: took 7.383445ms for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.418163   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "etcd-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.418172   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.427341   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "kube-apiserver-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.427414   60421 pod_ready.go:82] duration metric: took 9.234588ms for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.427424   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "kube-apiserver-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.427432   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.435329   60421 pod_ready.go:98] node "no-preload-232338" hosting pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.435378   60421 pod_ready.go:82] duration metric: took 7.931923ms for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	E1216 21:00:07.435392   60421 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-232338" hosting pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-232338" has status "Ready":"False"
	I1216 21:00:07.435408   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5mw2b" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:04.457220   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.457399   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.457507   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.457596   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.457687   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 21:00:04.613834   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.613870   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 21:00:04.613923   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 21:00:04.613931   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 21:00:04.613960   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 21:00:04.613972   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 21:00:04.619915   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1216 21:00:04.791265   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1216 21:00:04.791297   60933 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 21:00:04.791315   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1216 21:00:04.791352   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1216 21:00:04.791366   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1216 21:00:04.791384   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1216 21:00:04.836463   60933 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1216 21:00:04.836536   60933 cache_images.go:92] duration metric: took 1.168508622s to LoadCachedImages
	W1216 21:00:04.836633   60933 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20091-7083/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1216 21:00:04.836649   60933 kubeadm.go:934] updating node { 192.168.72.240 8443 v1.20.0 crio true true} ...
	I1216 21:00:04.836781   60933 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-847766 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 21:00:04.836877   60933 ssh_runner.go:195] Run: crio config
	I1216 21:00:04.898330   60933 cni.go:84] Creating CNI manager for ""
	I1216 21:00:04.898357   60933 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:04.898371   60933 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 21:00:04.898396   60933 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.240 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-847766 NodeName:old-k8s-version-847766 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1216 21:00:04.898560   60933 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-847766"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 21:00:04.898643   60933 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1216 21:00:04.910946   60933 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 21:00:04.911045   60933 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 21:00:04.923199   60933 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1216 21:00:04.942705   60933 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 21:00:04.976598   60933 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1216 21:00:05.001967   60933 ssh_runner.go:195] Run: grep 192.168.72.240	control-plane.minikube.internal$ /etc/hosts
	I1216 21:00:05.006819   60933 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.240	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:00:05.020604   60933 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:05.143039   60933 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:00:05.162507   60933 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766 for IP: 192.168.72.240
	I1216 21:00:05.162535   60933 certs.go:194] generating shared ca certs ...
	I1216 21:00:05.162554   60933 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:05.162749   60933 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 21:00:05.162792   60933 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 21:00:05.162803   60933 certs.go:256] generating profile certs ...
	I1216 21:00:05.162907   60933 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.key
	I1216 21:00:05.162976   60933 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key.6c8704df
	I1216 21:00:05.163011   60933 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key
	I1216 21:00:05.163148   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 21:00:05.163176   60933 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 21:00:05.163186   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 21:00:05.163210   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 21:00:05.163278   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 21:00:05.163315   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 21:00:05.163379   60933 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:05.164216   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 21:00:05.222491   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 21:00:05.253517   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 21:00:05.294338   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 21:00:05.342850   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 21:00:05.388068   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 21:00:05.422591   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 21:00:05.471916   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 21:00:05.505836   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 21:00:05.539404   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 21:00:05.570819   60933 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 21:00:05.602079   60933 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 21:00:05.630577   60933 ssh_runner.go:195] Run: openssl version
	I1216 21:00:05.640017   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 21:00:05.653759   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.659573   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.659645   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 21:00:05.666667   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 21:00:05.680061   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 21:00:05.692776   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.698644   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.698728   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:05.705913   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 21:00:05.730062   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 21:00:05.750034   60933 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.757158   60933 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.757252   60933 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 21:00:05.765679   60933 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
	I1216 21:00:05.782537   60933 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 21:00:05.788291   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 21:00:05.797707   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 21:00:05.807016   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 21:00:05.818160   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 21:00:05.827428   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 21:00:05.836499   60933 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 21:00:05.846104   60933 kubeadm.go:392] StartCluster: {Name:old-k8s-version-847766 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-847766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.240 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:00:05.846244   60933 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 21:00:05.846331   60933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:05.901274   60933 cri.go:89] found id: ""
	I1216 21:00:05.901376   60933 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 21:00:05.917353   60933 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 21:00:05.917381   60933 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 21:00:05.917439   60933 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 21:00:05.928587   60933 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 21:00:05.932546   60933 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-847766" does not appear in /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:00:05.933844   60933 kubeconfig.go:62] /home/jenkins/minikube-integration/20091-7083/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-847766" cluster setting kubeconfig missing "old-k8s-version-847766" context setting]
	I1216 21:00:05.935400   60933 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:05.938054   60933 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 21:00:05.950384   60933 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.240
	I1216 21:00:05.950433   60933 kubeadm.go:1160] stopping kube-system containers ...
	I1216 21:00:05.950450   60933 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 21:00:05.950515   60933 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:05.999495   60933 cri.go:89] found id: ""
	I1216 21:00:05.999588   60933 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 21:00:06.024765   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:00:06.037807   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:00:06.037836   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:00:06.037894   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:00:06.048926   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:00:06.048997   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:00:06.060167   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:00:06.070841   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:00:06.070910   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:00:06.083517   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:00:06.099124   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:00:06.099214   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:00:06.110004   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:00:06.125600   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:00:06.125668   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:00:06.137212   60933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:00:06.148873   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:06.316611   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.220187   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.549730   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.698864   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:07.815495   60933 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:00:07.815657   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:08.316003   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:08.816482   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:09.315805   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:05.050699   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:05.051378   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:05.051413   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:05.051296   62108 retry.go:31] will retry after 2.184829789s: waiting for machine to come up
	I1216 21:00:07.237618   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:07.238137   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:07.238166   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:07.238049   62108 retry.go:31] will retry after 2.531717629s: waiting for machine to come up
	I1216 21:00:05.713060   60829 pod_ready.go:103] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:05.798544   60829 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.798569   60829 pod_ready.go:82] duration metric: took 2.508055323s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.798582   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mplxr" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.805322   60829 pod_ready.go:93] pod "kube-proxy-mplxr" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.805361   60829 pod_ready.go:82] duration metric: took 6.77ms for pod "kube-proxy-mplxr" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.805399   60829 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.812700   60829 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:05.812727   60829 pod_ready.go:82] duration metric: took 7.281992ms for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:05.812741   60829 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:07.822004   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:10.321160   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:09.443582   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:11.443796   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:09.815863   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:10.316664   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:10.815852   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:11.316175   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:11.816446   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:12.316040   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:12.816172   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:13.316460   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:13.815700   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:14.316469   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:09.772318   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:09.772837   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:09.772869   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:09.772797   62108 retry.go:31] will retry after 2.557982234s: waiting for machine to come up
	I1216 21:00:12.331877   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:12.332340   60215 main.go:141] libmachine: (embed-certs-606219) DBG | unable to find current IP address of domain embed-certs-606219 in network mk-embed-certs-606219
	I1216 21:00:12.332368   60215 main.go:141] libmachine: (embed-certs-606219) DBG | I1216 21:00:12.332298   62108 retry.go:31] will retry after 4.202991569s: waiting for machine to come up
	I1216 21:00:12.322897   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:14.323015   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:13.942154   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:16.442411   60421 pod_ready.go:103] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:14.816539   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:15.315737   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:15.816465   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.316470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.816451   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:17.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:17.816470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:18.316165   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:18.816448   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:19.315972   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:16.539792   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.540299   60215 main.go:141] libmachine: (embed-certs-606219) Found IP for machine: 192.168.61.151
	I1216 21:00:16.540324   60215 main.go:141] libmachine: (embed-certs-606219) Reserving static IP address...
	I1216 21:00:16.540341   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has current primary IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.540771   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "embed-certs-606219", mac: "52:54:00:63:37:8f", ip: "192.168.61.151"} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.540810   60215 main.go:141] libmachine: (embed-certs-606219) DBG | skip adding static IP to network mk-embed-certs-606219 - found existing host DHCP lease matching {name: "embed-certs-606219", mac: "52:54:00:63:37:8f", ip: "192.168.61.151"}
	I1216 21:00:16.540827   60215 main.go:141] libmachine: (embed-certs-606219) Reserved static IP address: 192.168.61.151
	I1216 21:00:16.540839   60215 main.go:141] libmachine: (embed-certs-606219) Waiting for SSH to be available...
	I1216 21:00:16.540847   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Getting to WaitForSSH function...
	I1216 21:00:16.542958   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.543461   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.543503   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.543629   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Using SSH client type: external
	I1216 21:00:16.543663   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa (-rw-------)
	I1216 21:00:16.543696   60215 main.go:141] libmachine: (embed-certs-606219) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 21:00:16.543713   60215 main.go:141] libmachine: (embed-certs-606219) DBG | About to run SSH command:
	I1216 21:00:16.543732   60215 main.go:141] libmachine: (embed-certs-606219) DBG | exit 0
	I1216 21:00:16.671576   60215 main.go:141] libmachine: (embed-certs-606219) DBG | SSH cmd err, output: <nil>: 
	I1216 21:00:16.671965   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetConfigRaw
	I1216 21:00:16.672599   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:16.675179   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.675520   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.675549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.675726   60215 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/config.json ...
	I1216 21:00:16.675938   60215 machine.go:93] provisionDockerMachine start ...
	I1216 21:00:16.675955   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:16.676186   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.678481   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.678824   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.678846   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.679020   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.679203   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.679388   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.679530   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.679689   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.679883   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.679896   60215 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 21:00:16.791925   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 21:00:16.791959   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:16.792224   60215 buildroot.go:166] provisioning hostname "embed-certs-606219"
	I1216 21:00:16.792261   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:16.792492   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.794967   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.795359   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.795388   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.795496   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.795674   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.795845   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.795995   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.796238   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.796466   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.796486   60215 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-606219 && echo "embed-certs-606219" | sudo tee /etc/hostname
	I1216 21:00:16.923887   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-606219
	
	I1216 21:00:16.923922   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:16.926689   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.927228   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:16.927283   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:16.927500   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:16.927724   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.927943   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:16.928139   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:16.928396   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:16.928574   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:16.928590   60215 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-606219' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-606219/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-606219' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 21:00:17.045462   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 21:00:17.045508   60215 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
	I1216 21:00:17.045540   60215 buildroot.go:174] setting up certificates
	I1216 21:00:17.045560   60215 provision.go:84] configureAuth start
	I1216 21:00:17.045578   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetMachineName
	I1216 21:00:17.045889   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:17.048733   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.049038   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.049062   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.049216   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.051371   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.051713   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.051748   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.051861   60215 provision.go:143] copyHostCerts
	I1216 21:00:17.051940   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem, removing ...
	I1216 21:00:17.051954   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem
	I1216 21:00:17.052033   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
	I1216 21:00:17.052187   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem, removing ...
	I1216 21:00:17.052203   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem
	I1216 21:00:17.052230   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
	I1216 21:00:17.052306   60215 exec_runner.go:144] found /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem, removing ...
	I1216 21:00:17.052317   60215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem
	I1216 21:00:17.052342   60215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
	I1216 21:00:17.052413   60215 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.embed-certs-606219 san=[127.0.0.1 192.168.61.151 embed-certs-606219 localhost minikube]
	I1216 21:00:17.345020   60215 provision.go:177] copyRemoteCerts
	I1216 21:00:17.345079   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 21:00:17.345116   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.348019   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.348323   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.348350   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.348554   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.348783   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.348931   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.349093   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:17.434520   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1216 21:00:17.462097   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1216 21:00:17.488071   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 21:00:17.516428   60215 provision.go:87] duration metric: took 470.851303ms to configureAuth
	I1216 21:00:17.516461   60215 buildroot.go:189] setting minikube options for container-runtime
	I1216 21:00:17.516673   60215 config.go:182] Loaded profile config "embed-certs-606219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:00:17.516763   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.519637   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.519981   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.520019   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.520229   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.520451   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.520654   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.520813   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.520977   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:17.521148   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:17.521166   60215 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 21:00:17.787052   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 21:00:17.787084   60215 machine.go:96] duration metric: took 1.111132885s to provisionDockerMachine
	I1216 21:00:17.787111   60215 start.go:293] postStartSetup for "embed-certs-606219" (driver="kvm2")
	I1216 21:00:17.787126   60215 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 21:00:17.787145   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:17.787551   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 21:00:17.787588   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.790332   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.790710   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.790743   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.790891   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.791130   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.791336   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.791492   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:17.881548   60215 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 21:00:17.886692   60215 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 21:00:17.886720   60215 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
	I1216 21:00:17.886788   60215 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
	I1216 21:00:17.886886   60215 filesync.go:149] local asset: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem -> 142542.pem in /etc/ssl/certs
	I1216 21:00:17.886983   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 21:00:17.897832   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:17.926273   60215 start.go:296] duration metric: took 139.147156ms for postStartSetup
	I1216 21:00:17.926316   60215 fix.go:56] duration metric: took 21.229856025s for fixHost
	I1216 21:00:17.926338   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:17.929204   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.929600   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:17.929623   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:17.929809   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:17.930036   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.930220   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:17.930411   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:17.930554   60215 main.go:141] libmachine: Using SSH client type: native
	I1216 21:00:17.930723   60215 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.151 22 <nil> <nil>}
	I1216 21:00:17.930734   60215 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 21:00:18.040530   60215 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734382817.988837134
	
	I1216 21:00:18.040557   60215 fix.go:216] guest clock: 1734382817.988837134
	I1216 21:00:18.040590   60215 fix.go:229] Guest: 2024-12-16 21:00:17.988837134 +0000 UTC Remote: 2024-12-16 21:00:17.926320778 +0000 UTC m=+358.266755361 (delta=62.516356ms)
	I1216 21:00:18.040639   60215 fix.go:200] guest clock delta is within tolerance: 62.516356ms
	I1216 21:00:18.040650   60215 start.go:83] releasing machines lock for "embed-certs-606219", held for 21.34422537s
	I1216 21:00:18.040682   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.040997   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:18.044100   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.044549   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.044584   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.044727   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045237   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045454   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:00:18.045544   60215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 21:00:18.045602   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:18.045673   60215 ssh_runner.go:195] Run: cat /version.json
	I1216 21:00:18.045702   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:00:18.048852   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049066   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049259   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.049285   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049423   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:18.049578   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:18.049610   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:18.049611   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:18.049688   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:00:18.049885   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:00:18.049908   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:18.050090   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:00:18.050082   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:18.050313   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:00:18.128381   60215 ssh_runner.go:195] Run: systemctl --version
	I1216 21:00:18.165162   60215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 21:00:18.313679   60215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 21:00:18.321330   60215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 21:00:18.321407   60215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 21:00:18.340577   60215 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 21:00:18.340601   60215 start.go:495] detecting cgroup driver to use...
	I1216 21:00:18.340672   60215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 21:00:18.357273   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 21:00:18.373169   60215 docker.go:217] disabling cri-docker service (if available) ...
	I1216 21:00:18.373231   60215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 21:00:18.387904   60215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 21:00:18.402499   60215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 21:00:18.528830   60215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 21:00:18.677746   60215 docker.go:233] disabling docker service ...
	I1216 21:00:18.677839   60215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 21:00:18.693059   60215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 21:00:18.707368   60215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 21:00:18.870936   60215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 21:00:19.011321   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 21:00:19.025645   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 21:00:19.045618   60215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 21:00:19.045695   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.056739   60215 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 21:00:19.056813   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.067975   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.078954   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.090165   60215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 21:00:19.101906   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.112949   60215 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.131186   60215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 21:00:19.142238   60215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 21:00:19.152768   60215 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 21:00:19.152830   60215 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 21:00:19.169166   60215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 21:00:19.188991   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:19.319083   60215 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 21:00:19.427266   60215 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 21:00:19.427377   60215 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 21:00:19.432716   60215 start.go:563] Will wait 60s for crictl version
	I1216 21:00:19.432793   60215 ssh_runner.go:195] Run: which crictl
	I1216 21:00:19.437514   60215 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 21:00:19.484613   60215 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 21:00:19.484726   60215 ssh_runner.go:195] Run: crio --version
	I1216 21:00:19.519451   60215 ssh_runner.go:195] Run: crio --version
	I1216 21:00:19.555298   60215 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1216 21:00:19.556696   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetIP
	I1216 21:00:19.559802   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:19.560178   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:00:19.560201   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:00:19.560467   60215 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1216 21:00:19.565180   60215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 21:00:19.579863   60215 kubeadm.go:883] updating cluster {Name:embed-certs-606219 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 21:00:19.579991   60215 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1216 21:00:19.580037   60215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:19.618480   60215 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1216 21:00:19.618556   60215 ssh_runner.go:195] Run: which lz4
	I1216 21:00:19.622839   60215 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 21:00:19.627438   60215 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 21:00:19.627482   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1216 21:00:16.819610   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:19.326427   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:17.942107   60421 pod_ready.go:93] pod "kube-proxy-5mw2b" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:17.942148   60421 pod_ready.go:82] duration metric: took 10.506728599s for pod "kube-proxy-5mw2b" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.942161   60421 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.948518   60421 pod_ready.go:93] pod "kube-scheduler-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:17.948540   60421 pod_ready.go:82] duration metric: took 6.372903ms for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:17.948549   60421 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:19.956992   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:21.957271   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:19.815807   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:20.316465   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:20.816461   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.316731   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.816637   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:22.315727   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:22.816447   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:23.316510   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:23.816408   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:24.316454   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:21.237863   60215 crio.go:462] duration metric: took 1.615059209s to copy over tarball
	I1216 21:00:21.237956   60215 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 21:00:23.572502   60215 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.33450798s)
	I1216 21:00:23.572535   60215 crio.go:469] duration metric: took 2.334633133s to extract the tarball
	I1216 21:00:23.572549   60215 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 21:00:23.613530   60215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 21:00:23.667777   60215 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 21:00:23.667807   60215 cache_images.go:84] Images are preloaded, skipping loading
	I1216 21:00:23.667815   60215 kubeadm.go:934] updating node { 192.168.61.151 8443 v1.32.0 crio true true} ...
	I1216 21:00:23.667929   60215 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-606219 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 21:00:23.668009   60215 ssh_runner.go:195] Run: crio config
	I1216 21:00:23.716162   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:00:23.716184   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:23.716192   60215 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 21:00:23.716211   60215 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.151 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-606219 NodeName:embed-certs-606219 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 21:00:23.716337   60215 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-606219"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.151"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.151"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 21:00:23.716393   60215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1216 21:00:23.727236   60215 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 21:00:23.727337   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 21:00:23.737632   60215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1216 21:00:23.757380   60215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 21:00:23.774863   60215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
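(For reference, this scp is what puts the rendered kubeadm config shown above onto the node as /var/tmp/minikube/kubeadm.yaml.new. A hypothetical way to sanity-check it by hand from inside the VM — not something the test does, and assuming the bundled kubeadm is recent enough to have the "config validate" subcommand — would be:)

    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    # validate the file against kubeadm's own schema (recent kubeadm releases only)
    sudo /var/lib/minikube/binaries/v1.32.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new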
	I1216 21:00:23.795070   60215 ssh_runner.go:195] Run: grep 192.168.61.151	control-plane.minikube.internal$ /etc/hosts
	I1216 21:00:23.799453   60215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
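(The /etc/hosts rewrite above is a single compound command; expanded for readability it is equivalent to the sketch below. The grep pattern matches the tab-separated control-plane entry, and $$ just makes the temp file name unique per shell; only the copy back into place needs root.)

    # keep every existing line except a stale control-plane.minikube.internal entry
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      # append the current tab-separated mapping
      printf '192.168.61.151\tcontrol-plane.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts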
	I1216 21:00:23.814278   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:00:23.962200   60215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:00:23.981947   60215 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219 for IP: 192.168.61.151
	I1216 21:00:23.981976   60215 certs.go:194] generating shared ca certs ...
	I1216 21:00:23.981999   60215 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:00:23.982156   60215 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
	I1216 21:00:23.982197   60215 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
	I1216 21:00:23.982204   60215 certs.go:256] generating profile certs ...
	I1216 21:00:23.982280   60215 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/client.key
	I1216 21:00:23.982336   60215 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.key.b346be49
	I1216 21:00:23.982376   60215 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.key
	I1216 21:00:23.982483   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem (1338 bytes)
	W1216 21:00:23.982513   60215 certs.go:480] ignoring /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254_empty.pem, impossibly tiny 0 bytes
	I1216 21:00:23.982523   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 21:00:23.982555   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
	I1216 21:00:23.982582   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
	I1216 21:00:23.982602   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
	I1216 21:00:23.982655   60215 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem (1708 bytes)
	I1216 21:00:23.983524   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 21:00:24.015369   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 21:00:24.043889   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 21:00:24.087807   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 21:00:24.137438   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1216 21:00:24.174859   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 21:00:24.200220   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 21:00:24.225811   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/embed-certs-606219/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 21:00:24.251567   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/ssl/certs/142542.pem --> /usr/share/ca-certificates/142542.pem (1708 bytes)
	I1216 21:00:24.276737   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 21:00:24.302541   60215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/14254.pem --> /usr/share/ca-certificates/14254.pem (1338 bytes)
	I1216 21:00:24.329876   60215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 21:00:24.350133   60215 ssh_runner.go:195] Run: openssl version
	I1216 21:00:24.356984   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/142542.pem && ln -fs /usr/share/ca-certificates/142542.pem /etc/ssl/certs/142542.pem"
	I1216 21:00:24.371219   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.376759   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 19:42 /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.376816   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/142542.pem
	I1216 21:00:24.383725   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/142542.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 21:00:24.397759   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 21:00:24.409836   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.414765   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.414836   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 21:00:24.421662   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 21:00:24.433843   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14254.pem && ln -fs /usr/share/ca-certificates/14254.pem /etc/ssl/certs/14254.pem"
	I1216 21:00:24.447839   60215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.453107   60215 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 19:42 /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.453185   60215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14254.pem
	I1216 21:00:24.459472   60215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14254.pem /etc/ssl/certs/51391683.0"
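(The symlink names used above — 3ec20f2e.0, b5213941.0, 51391683.0 — are not arbitrary: each is the OpenSSL subject-name hash of the corresponding certificate, which is how OpenSSL looks certificates up in /etc/ssl/certs. Illustratively:)

    # compute the hash OpenSSL uses for directory lookups, then create the
    # conventional "<hash>.0" symlink it expects
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"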
	I1216 21:00:24.471714   60215 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 21:00:24.476881   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 21:00:24.486263   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 21:00:24.493146   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 21:00:24.500093   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 21:00:24.506599   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 21:00:24.512946   60215 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
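(Each openssl -checkend 86400 probe above asks whether a certificate will still be valid 86400 seconds, i.e. 24 hours, from now; only the exit status matters, which is why no output appears in the log.)

    # exit 0: not expiring within 24h; exit 1: expires (or already expired) within 24h
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid for 24h" || echo "would need regeneration"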
	I1216 21:00:24.519699   60215 kubeadm.go:392] StartCluster: {Name:embed-certs-606219 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:embed-certs-606219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 21:00:24.519780   60215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 21:00:24.519861   60215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:24.570867   60215 cri.go:89] found id: ""
	I1216 21:00:24.570952   60215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 21:00:24.583857   60215 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 21:00:24.583887   60215 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 21:00:24.583943   60215 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 21:00:24.595709   60215 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 21:00:24.596734   60215 kubeconfig.go:125] found "embed-certs-606219" server: "https://192.168.61.151:8443"
	I1216 21:00:24.598569   60215 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 21:00:24.609876   60215 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.151
	I1216 21:00:24.609905   60215 kubeadm.go:1160] stopping kube-system containers ...
	I1216 21:00:24.609917   60215 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 21:00:24.609964   60215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 21:00:24.654487   60215 cri.go:89] found id: ""
	I1216 21:00:24.654567   60215 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 21:00:24.676658   60215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:00:24.689546   60215 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:00:24.689571   60215 kubeadm.go:157] found existing configuration files:
	
	I1216 21:00:24.689615   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:00:21.819876   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:23.820061   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:23.957368   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:26.556301   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:24.816467   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:25.315789   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:25.816410   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.316537   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.816144   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.316659   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.816126   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:28.316568   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:28.816151   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:29.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:24.700928   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:00:24.701012   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:00:24.713438   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:00:24.725184   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:00:24.725257   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:00:24.737483   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:00:24.749488   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:00:24.749546   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:00:24.762322   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:00:24.774309   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:00:24.774391   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:00:24.787008   60215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:00:24.798394   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:25.009799   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:25.917432   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:26.175602   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:26.279646   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
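(Because existing configuration files were found, the restart path above re-runs individual kubeadm init phases — certs, kubeconfig, kubelet-start, control-plane, etcd local — against the same config instead of a full kubeadm init. Any single phase can be repeated in isolation the same way, e.g.:)

    sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" \
      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml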
	I1216 21:00:26.362472   60215 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:00:26.362564   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:26.862646   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.362663   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:27.421335   60215 api_server.go:72] duration metric: took 1.058863872s to wait for apiserver process to appear ...
	I1216 21:00:27.421361   60215 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:00:27.421380   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:27.421869   60215 api_server.go:269] stopped: https://192.168.61.151:8443/healthz: Get "https://192.168.61.151:8443/healthz": dial tcp 192.168.61.151:8443: connect: connection refused
	I1216 21:00:27.921493   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:26.471175   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:28.819200   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:30.365380   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.365410   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.365425   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.416044   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.416078   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.422219   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.432135   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 21:00:30.432161   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 21:00:30.921790   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:30.929160   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:30.929192   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:31.421708   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:31.432805   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 21:00:31.432839   60215 api_server.go:103] status: https://192.168.61.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 21:00:31.922000   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:00:31.933658   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 200:
	ok
	I1216 21:00:31.945496   60215 api_server.go:141] control plane version: v1.32.0
	I1216 21:00:31.945534   60215 api_server.go:131] duration metric: took 4.524165612s to wait for apiserver health ...
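(The healthz progression above — 403, then 500, then 200 — is consistent with the expected startup sequence: anonymous requests are rejected until the rbac/bootstrap-roles poststart hook has installed the role that exposes /healthz, the endpoint then reports 500 while the remaining hooks finish, and finally 200. The same probe by hand would look roughly like this; -k is needed because the host shell does not trust the cluster CA.)

    curl -k https://192.168.61.151:8443/healthz
    # the verbose form returns the per-check [+]/[-] listing seen in the log
    curl -k 'https://192.168.61.151:8443/healthz?verbose'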
	I1216 21:00:31.945546   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:00:31.945555   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:00:31.947456   60215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:00:28.954572   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:30.955397   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:29.816510   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:30.315756   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:30.815774   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.316516   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.816503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:32.316499   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:32.816455   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:33.316478   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:33.816363   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:34.316057   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:31.948727   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:00:31.977877   60215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
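(This writes the 496-byte bridge conflist into /etc/cni/net.d, the directory CRI-O reads network configurations from by default. A quick way to confirm it landed — illustrative, not part of the test run:)

    sudo ls -la /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist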
	I1216 21:00:32.014745   60215 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:00:32.027268   60215 system_pods.go:59] 8 kube-system pods found
	I1216 21:00:32.027303   60215 system_pods.go:61] "coredns-668d6bf9bc-rp29f" [0135dcef-2324-49ec-b459-f34b73efd82b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:00:32.027311   60215 system_pods.go:61] "etcd-embed-certs-606219" [05f01ef3-5d92-4d16-9643-0f56df3869f6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 21:00:32.027320   60215 system_pods.go:61] "kube-apiserver-embed-certs-606219" [4294c469-e47a-4722-a620-92c33d23b41e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 21:00:32.027326   60215 system_pods.go:61] "kube-controller-manager-embed-certs-606219" [cc8452e6-ca00-44dd-8d77-897df20d37f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 21:00:32.027354   60215 system_pods.go:61] "kube-proxy-8t495" [492be5cc-7d3a-4983-9bc7-14091bef7b43] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 21:00:32.027362   60215 system_pods.go:61] "kube-scheduler-embed-certs-606219" [63c42d73-a17a-4b37-a585-f7db5923c493] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 21:00:32.027376   60215 system_pods.go:61] "metrics-server-f79f97bbb-d6gmd" [50916d48-ee33-4e96-9507-c486d8ac7f7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:00:32.027387   60215 system_pods.go:61] "storage-provisioner" [1164651f-c3b5-445f-882a-60eb2f2fb3f8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 21:00:32.027399   60215 system_pods.go:74] duration metric: took 12.633182ms to wait for pod list to return data ...
	I1216 21:00:32.027409   60215 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:00:32.041648   60215 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:00:32.041677   60215 node_conditions.go:123] node cpu capacity is 2
	I1216 21:00:32.041686   60215 node_conditions.go:105] duration metric: took 14.27317ms to run NodePressure ...
	I1216 21:00:32.041704   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 21:00:32.492472   60215 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 21:00:32.504237   60215 kubeadm.go:739] kubelet initialised
	I1216 21:00:32.504271   60215 kubeadm.go:740] duration metric: took 11.772175ms waiting for restarted kubelet to initialise ...
	I1216 21:00:32.504282   60215 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
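(The pod_ready polling that follows is minikube's own readiness loop over the labelled system-critical pods; roughly the same check by hand would be the command below — illustrative, and assuming the kubeconfig context carries the profile name.)

    kubectl --context embed-certs-606219 -n kube-system wait \
      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m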
	I1216 21:00:32.525531   60215 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:34.531954   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:31.321998   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:33.325288   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:32.959143   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:35.454928   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:37.455474   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:34.815839   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:35.316503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:35.816590   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.316231   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.816011   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:37.316485   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:37.816494   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:38.316486   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:38.816475   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:39.315762   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:36.534516   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.032255   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:35.819575   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:38.322139   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:40.322804   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.456089   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:41.955128   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:39.816009   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:40.316444   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:40.816493   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.315869   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.816495   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:42.316034   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:42.816422   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:43.316432   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:43.815875   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:44.316036   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:41.032545   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:43.534471   60215 pod_ready.go:103] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:42.819610   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:44.820561   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:43.955190   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:46.455540   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:44.816293   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.316458   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.815992   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:46.316054   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:46.816449   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:47.316113   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:47.816514   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:48.316353   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:48.816144   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:49.316435   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:45.031682   60215 pod_ready.go:93] pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.031705   60215 pod_ready.go:82] duration metric: took 12.506146086s for pod "coredns-668d6bf9bc-rp29f" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.031715   60215 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.038109   60215 pod_ready.go:93] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.038138   60215 pod_ready.go:82] duration metric: took 6.416609ms for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.038149   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.043764   60215 pod_ready.go:93] pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.043784   60215 pod_ready.go:82] duration metric: took 5.621982ms for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.043793   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.053376   60215 pod_ready.go:93] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.053399   60215 pod_ready.go:82] duration metric: took 9.600142ms for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.053409   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-8t495" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.058956   60215 pod_ready.go:93] pod "kube-proxy-8t495" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.058976   60215 pod_ready.go:82] duration metric: took 5.561188ms for pod "kube-proxy-8t495" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.058984   60215 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.429908   60215 pod_ready.go:93] pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:00:45.429932   60215 pod_ready.go:82] duration metric: took 370.942192ms for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:45.429942   60215 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" ...
	I1216 21:00:47.438759   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:47.323605   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:49.819763   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:48.456270   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:50.955190   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:49.815935   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:50.316437   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:50.816335   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:51.315747   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:51.816504   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:52.315695   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:52.816115   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:53.316498   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:53.816529   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:54.315689   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:49.935961   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:51.937245   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:53.937302   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:51.820266   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:53.820748   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:52.956645   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:55.456064   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:54.816019   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:55.316484   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:55.816517   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.315858   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.816306   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:57.316447   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:57.815879   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:58.316493   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:58.816395   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:59.316225   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:00:56.437390   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:58.938617   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:56.323619   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:58.820330   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:57.956401   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:00.456844   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:02.457677   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:00:59.816440   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:00.315769   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:00.816285   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.316020   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.818175   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:02.315780   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:02.816411   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:03.315758   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:03.815810   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:04.316731   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:01.436856   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:03.436945   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:00.820484   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:03.323328   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:04.955714   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.455361   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:04.816470   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:05.316528   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:05.815792   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:06.316491   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:06.815977   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:07.316002   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:07.816043   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:07.816114   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:07.861866   60933 cri.go:89] found id: ""
	I1216 21:01:07.861896   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.861906   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:07.861913   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:07.861978   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:07.905674   60933 cri.go:89] found id: ""
	I1216 21:01:07.905700   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.905707   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:07.905713   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:07.905798   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:07.949936   60933 cri.go:89] found id: ""
	I1216 21:01:07.949966   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.949977   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:07.949984   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:07.950048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:07.987196   60933 cri.go:89] found id: ""
	I1216 21:01:07.987223   60933 logs.go:282] 0 containers: []
	W1216 21:01:07.987232   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:07.987237   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:07.987341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:08.033126   60933 cri.go:89] found id: ""
	I1216 21:01:08.033156   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.033168   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:08.033176   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:08.033252   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:08.072223   60933 cri.go:89] found id: ""
	I1216 21:01:08.072257   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.072270   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:08.072278   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:08.072345   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:08.117257   60933 cri.go:89] found id: ""
	I1216 21:01:08.117288   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.117299   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:08.117319   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:08.117389   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:08.158059   60933 cri.go:89] found id: ""
	I1216 21:01:08.158096   60933 logs.go:282] 0 containers: []
	W1216 21:01:08.158106   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:08.158119   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:08.158133   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:08.232930   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:08.232966   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:08.277173   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:08.277204   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:08.331763   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:08.331802   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:08.346150   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:08.346178   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:08.488668   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:05.437627   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.938294   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:05.820491   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:07.821058   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:10.322630   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:09.456101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:11.461923   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:10.989383   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:11.003162   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:11.003266   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:11.040432   60933 cri.go:89] found id: ""
	I1216 21:01:11.040464   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.040475   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:11.040483   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:11.040547   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:11.083083   60933 cri.go:89] found id: ""
	I1216 21:01:11.083110   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.083117   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:11.083122   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:11.083182   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:11.122842   60933 cri.go:89] found id: ""
	I1216 21:01:11.122880   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.122893   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:11.122900   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:11.122969   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:11.168227   60933 cri.go:89] found id: ""
	I1216 21:01:11.168268   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.168279   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:11.168286   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:11.168359   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:11.218660   60933 cri.go:89] found id: ""
	I1216 21:01:11.218689   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.218701   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:11.218708   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:11.218774   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:11.281179   60933 cri.go:89] found id: ""
	I1216 21:01:11.281214   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.281227   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:11.281236   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:11.281315   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:11.326419   60933 cri.go:89] found id: ""
	I1216 21:01:11.326453   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.326464   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:11.326472   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:11.326535   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:11.368825   60933 cri.go:89] found id: ""
	I1216 21:01:11.368863   60933 logs.go:282] 0 containers: []
	W1216 21:01:11.368875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:11.368887   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:11.368905   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:11.454848   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:11.454869   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:11.454888   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:11.541685   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:11.541724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:11.581804   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:11.581830   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:11.635800   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:11.635838   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:14.152441   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:14.167637   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:14.167720   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:14.206685   60933 cri.go:89] found id: ""
	I1216 21:01:14.206716   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.206728   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:14.206735   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:14.206796   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:14.248126   60933 cri.go:89] found id: ""
	I1216 21:01:14.248151   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.248159   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:14.248165   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:14.248215   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:14.285030   60933 cri.go:89] found id: ""
	I1216 21:01:14.285067   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.285079   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:14.285086   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:14.285151   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:14.325706   60933 cri.go:89] found id: ""
	I1216 21:01:14.325736   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.325747   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:14.325755   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:14.325820   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:14.369447   60933 cri.go:89] found id: ""
	I1216 21:01:14.369475   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.369486   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:14.369494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:14.369557   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:10.437872   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:12.937013   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:12.820480   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:15.319910   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:13.959919   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:16.458101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:14.407792   60933 cri.go:89] found id: ""
	I1216 21:01:14.407818   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.407826   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:14.407832   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:14.407890   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:14.448380   60933 cri.go:89] found id: ""
	I1216 21:01:14.448411   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.448419   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:14.448424   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:14.448473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:14.487116   60933 cri.go:89] found id: ""
	I1216 21:01:14.487144   60933 logs.go:282] 0 containers: []
	W1216 21:01:14.487154   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:14.487164   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:14.487177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:14.547342   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:14.547390   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:14.563385   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:14.563424   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:14.637363   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:14.637394   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:14.637410   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:14.715586   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:14.715626   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:17.258974   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:17.273896   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:17.273970   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:17.317359   60933 cri.go:89] found id: ""
	I1216 21:01:17.317394   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.317405   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:17.317412   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:17.317476   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:17.361422   60933 cri.go:89] found id: ""
	I1216 21:01:17.361451   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.361462   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:17.361469   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:17.361568   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:17.401466   60933 cri.go:89] found id: ""
	I1216 21:01:17.401522   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.401534   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:17.401544   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:17.401614   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:17.439560   60933 cri.go:89] found id: ""
	I1216 21:01:17.439588   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.439597   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:17.439603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:17.439655   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:17.480310   60933 cri.go:89] found id: ""
	I1216 21:01:17.480333   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.480340   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:17.480345   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:17.480393   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:17.528562   60933 cri.go:89] found id: ""
	I1216 21:01:17.528589   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.528600   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:17.528607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:17.528671   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:17.569863   60933 cri.go:89] found id: ""
	I1216 21:01:17.569900   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.569908   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:17.569914   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:17.569975   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:17.610840   60933 cri.go:89] found id: ""
	I1216 21:01:17.610867   60933 logs.go:282] 0 containers: []
	W1216 21:01:17.610875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:17.610884   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:17.610895   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:17.661002   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:17.661041   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:17.675290   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:17.675318   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:17.743550   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:17.743572   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:17.743584   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:17.824479   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:17.824524   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:15.437260   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:17.937487   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:17.324337   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:19.819325   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:18.956605   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:20.957030   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:20.373687   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:20.389149   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:20.389244   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:20.429594   60933 cri.go:89] found id: ""
	I1216 21:01:20.429626   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.429634   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:20.429639   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:20.429693   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:20.473157   60933 cri.go:89] found id: ""
	I1216 21:01:20.473185   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.473193   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:20.473198   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:20.473264   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:20.512549   60933 cri.go:89] found id: ""
	I1216 21:01:20.512586   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.512597   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:20.512604   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:20.512676   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:20.549275   60933 cri.go:89] found id: ""
	I1216 21:01:20.549310   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.549323   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:20.549344   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:20.549408   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:20.587405   60933 cri.go:89] found id: ""
	I1216 21:01:20.587435   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.587443   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:20.587449   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:20.587515   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:20.625364   60933 cri.go:89] found id: ""
	I1216 21:01:20.625393   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.625400   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:20.625406   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:20.625456   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:20.664018   60933 cri.go:89] found id: ""
	I1216 21:01:20.664050   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.664061   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:20.664068   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:20.664117   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:20.703860   60933 cri.go:89] found id: ""
	I1216 21:01:20.703890   60933 logs.go:282] 0 containers: []
	W1216 21:01:20.703898   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:20.703906   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:20.703918   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:20.754433   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:20.754470   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:20.770136   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:20.770172   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:20.854025   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:20.854049   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:20.854061   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:20.939628   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:20.939661   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:23.489645   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:23.503603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:23.503667   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:23.543044   60933 cri.go:89] found id: ""
	I1216 21:01:23.543070   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.543077   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:23.543083   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:23.543131   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:23.580333   60933 cri.go:89] found id: ""
	I1216 21:01:23.580362   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.580371   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:23.580377   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:23.580428   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:23.616732   60933 cri.go:89] found id: ""
	I1216 21:01:23.616766   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.616778   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:23.616785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:23.616834   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:23.655771   60933 cri.go:89] found id: ""
	I1216 21:01:23.655793   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.655801   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:23.655807   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:23.655861   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:23.694400   60933 cri.go:89] found id: ""
	I1216 21:01:23.694430   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.694437   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:23.694443   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:23.694500   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:23.732592   60933 cri.go:89] found id: ""
	I1216 21:01:23.732622   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.732630   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:23.732636   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:23.732688   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:23.769752   60933 cri.go:89] found id: ""
	I1216 21:01:23.769787   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.769801   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:23.769810   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:23.769892   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:23.806891   60933 cri.go:89] found id: ""
	I1216 21:01:23.806925   60933 logs.go:282] 0 containers: []
	W1216 21:01:23.806936   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:23.806947   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:23.806963   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:23.822887   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:23.822912   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:23.898795   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:23.898817   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:23.898830   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:23.978036   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:23.978073   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:24.032500   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:24.032528   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:20.437888   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:22.936895   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:21.819859   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:23.820383   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:23.456331   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:25.960513   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:26.585937   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:26.599470   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:26.599543   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:26.635421   60933 cri.go:89] found id: ""
	I1216 21:01:26.635446   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.635455   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:26.635461   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:26.635527   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:26.675347   60933 cri.go:89] found id: ""
	I1216 21:01:26.675379   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.675390   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:26.675397   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:26.675464   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:26.715444   60933 cri.go:89] found id: ""
	I1216 21:01:26.715469   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.715480   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:26.715541   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:26.715619   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:26.753841   60933 cri.go:89] found id: ""
	I1216 21:01:26.753874   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.753893   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:26.753901   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:26.753963   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:26.791427   60933 cri.go:89] found id: ""
	I1216 21:01:26.791453   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.791464   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:26.791473   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:26.791539   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:26.832772   60933 cri.go:89] found id: ""
	I1216 21:01:26.832804   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.832816   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:26.832823   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:26.832887   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:26.869963   60933 cri.go:89] found id: ""
	I1216 21:01:26.869990   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.869997   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:26.870003   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:26.870068   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:26.906792   60933 cri.go:89] found id: ""
	I1216 21:01:26.906821   60933 logs.go:282] 0 containers: []
	W1216 21:01:26.906862   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:26.906875   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:26.906894   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:26.994820   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:26.994863   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:27.034642   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:27.034686   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:27.089128   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:27.089168   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:27.104368   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:27.104401   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:27.179852   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:25.436696   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:27.937229   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:26.319568   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:28.820132   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:28.454880   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:30.455734   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:29.681052   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:29.695376   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:29.695464   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:29.735562   60933 cri.go:89] found id: ""
	I1216 21:01:29.735588   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.735596   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:29.735602   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:29.735650   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:29.772635   60933 cri.go:89] found id: ""
	I1216 21:01:29.772663   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.772672   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:29.772678   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:29.772737   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:29.810471   60933 cri.go:89] found id: ""
	I1216 21:01:29.810499   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.810509   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:29.810516   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:29.810575   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:29.845917   60933 cri.go:89] found id: ""
	I1216 21:01:29.845952   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.845966   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:29.845975   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:29.846048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:29.883866   60933 cri.go:89] found id: ""
	I1216 21:01:29.883892   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.883900   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:29.883906   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:29.883968   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:29.920696   60933 cri.go:89] found id: ""
	I1216 21:01:29.920729   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.920740   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:29.920748   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:29.920831   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:29.957977   60933 cri.go:89] found id: ""
	I1216 21:01:29.958056   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.958069   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:29.958079   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:29.958144   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:29.995436   60933 cri.go:89] found id: ""
	I1216 21:01:29.995464   60933 logs.go:282] 0 containers: []
	W1216 21:01:29.995472   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:29.995481   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:29.995492   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:30.046819   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:30.046859   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:30.062754   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:30.062807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:30.138932   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:30.138959   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:30.138975   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:30.225720   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:30.225768   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:32.768185   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:32.782642   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:32.782729   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:32.821995   60933 cri.go:89] found id: ""
	I1216 21:01:32.822029   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.822040   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:32.822048   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:32.822112   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:32.858453   60933 cri.go:89] found id: ""
	I1216 21:01:32.858487   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.858497   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:32.858504   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:32.858570   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:32.896269   60933 cri.go:89] found id: ""
	I1216 21:01:32.896304   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.896316   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:32.896323   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:32.896384   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:32.936795   60933 cri.go:89] found id: ""
	I1216 21:01:32.936820   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.936832   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:32.936838   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:32.936904   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:32.974779   60933 cri.go:89] found id: ""
	I1216 21:01:32.974810   60933 logs.go:282] 0 containers: []
	W1216 21:01:32.974821   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:32.974828   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:32.974892   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:33.012201   60933 cri.go:89] found id: ""
	I1216 21:01:33.012226   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.012234   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:33.012239   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:33.012287   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:33.049777   60933 cri.go:89] found id: ""
	I1216 21:01:33.049803   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.049811   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:33.049816   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:33.049873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:33.087820   60933 cri.go:89] found id: ""
	I1216 21:01:33.087851   60933 logs.go:282] 0 containers: []
	W1216 21:01:33.087859   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:33.087870   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:33.087885   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:33.140816   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:33.140854   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:33.154817   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:33.154855   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:33.231445   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:33.231474   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:33.231496   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:33.311547   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:33.311586   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:29.938045   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:32.436934   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:34.444209   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:31.321180   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:33.324091   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:32.956028   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.454994   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:37.455094   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.855686   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:35.870404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:35.870485   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:35.908175   60933 cri.go:89] found id: ""
	I1216 21:01:35.908204   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.908215   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:35.908222   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:35.908284   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:35.955456   60933 cri.go:89] found id: ""
	I1216 21:01:35.955483   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.955494   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:35.955501   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:35.955562   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:35.995170   60933 cri.go:89] found id: ""
	I1216 21:01:35.995201   60933 logs.go:282] 0 containers: []
	W1216 21:01:35.995211   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:35.995218   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:35.995305   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:36.033729   60933 cri.go:89] found id: ""
	I1216 21:01:36.033758   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.033769   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:36.033776   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:36.033840   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:36.072756   60933 cri.go:89] found id: ""
	I1216 21:01:36.072787   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.072799   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:36.072806   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:36.072873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:36.112149   60933 cri.go:89] found id: ""
	I1216 21:01:36.112187   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.112198   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:36.112205   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:36.112258   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:36.148742   60933 cri.go:89] found id: ""
	I1216 21:01:36.148770   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.148781   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:36.148789   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:36.148855   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:36.192827   60933 cri.go:89] found id: ""
	I1216 21:01:36.192864   60933 logs.go:282] 0 containers: []
	W1216 21:01:36.192875   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:36.192886   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:36.192901   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:36.243822   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:36.243867   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:36.258258   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:36.258292   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:36.342847   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:36.342876   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:36.342891   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:36.424741   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:36.424780   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:38.967334   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:38.982208   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:38.982283   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:39.023903   60933 cri.go:89] found id: ""
	I1216 21:01:39.023931   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.023939   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:39.023945   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:39.023997   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:39.070314   60933 cri.go:89] found id: ""
	I1216 21:01:39.070342   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.070351   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:39.070359   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:39.070423   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:39.115081   60933 cri.go:89] found id: ""
	I1216 21:01:39.115106   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.115113   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:39.115119   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:39.115178   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:39.151933   60933 cri.go:89] found id: ""
	I1216 21:01:39.151959   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.151967   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:39.151972   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:39.152022   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:39.192280   60933 cri.go:89] found id: ""
	I1216 21:01:39.192307   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.192315   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:39.192322   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:39.192370   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:39.228792   60933 cri.go:89] found id: ""
	I1216 21:01:39.228814   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.228822   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:39.228827   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:39.228887   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:39.266823   60933 cri.go:89] found id: ""
	I1216 21:01:39.266847   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.266854   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:39.266860   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:39.266908   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:39.301317   60933 cri.go:89] found id: ""
	I1216 21:01:39.301340   60933 logs.go:282] 0 containers: []
	W1216 21:01:39.301348   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:39.301361   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:39.301372   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:39.386615   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:39.386663   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:36.936376   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:38.936968   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:35.820025   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:37.820396   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:40.319915   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:39.457790   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:41.955758   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:39.433079   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:39.433112   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:39.489422   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:39.489458   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:39.504223   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:39.504259   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:39.587898   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:42.088900   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:42.103768   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:42.103854   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:42.141956   60933 cri.go:89] found id: ""
	I1216 21:01:42.142026   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.142040   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:42.142049   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:42.142117   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:42.178754   60933 cri.go:89] found id: ""
	I1216 21:01:42.178782   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.178818   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:42.178833   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:42.178891   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:42.215872   60933 cri.go:89] found id: ""
	I1216 21:01:42.215905   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.215916   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:42.215923   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:42.215991   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:42.253854   60933 cri.go:89] found id: ""
	I1216 21:01:42.253885   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.253896   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:42.253904   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:42.253972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:42.290963   60933 cri.go:89] found id: ""
	I1216 21:01:42.291008   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.291023   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:42.291039   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:42.291109   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:42.332920   60933 cri.go:89] found id: ""
	I1216 21:01:42.332946   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.332953   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:42.332959   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:42.333006   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:42.375060   60933 cri.go:89] found id: ""
	I1216 21:01:42.375093   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.375104   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:42.375112   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:42.375189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:42.416593   60933 cri.go:89] found id: ""
	I1216 21:01:42.416621   60933 logs.go:282] 0 containers: []
	W1216 21:01:42.416631   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:42.416639   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:42.416651   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:42.475204   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:42.475260   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:42.491022   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:42.491057   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:42.566645   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:42.566672   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:42.566687   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:42.646815   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:42.646856   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:41.436872   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:43.936734   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:42.321709   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:44.321985   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:43.955807   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:46.455508   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:45.191912   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:45.205487   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:45.205548   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:45.245350   60933 cri.go:89] found id: ""
	I1216 21:01:45.245389   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.245397   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:45.245404   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:45.245482   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:45.302126   60933 cri.go:89] found id: ""
	I1216 21:01:45.302158   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.302171   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:45.302178   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:45.302251   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:45.342888   60933 cri.go:89] found id: ""
	I1216 21:01:45.342917   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.342932   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:45.342937   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:45.342990   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:45.381545   60933 cri.go:89] found id: ""
	I1216 21:01:45.381574   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.381585   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:45.381593   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:45.381652   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:45.418081   60933 cri.go:89] found id: ""
	I1216 21:01:45.418118   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.418131   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:45.418138   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:45.418207   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:45.458610   60933 cri.go:89] found id: ""
	I1216 21:01:45.458637   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.458647   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:45.458655   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:45.458713   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:45.500102   60933 cri.go:89] found id: ""
	I1216 21:01:45.500137   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.500148   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:45.500155   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:45.500217   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:45.542074   60933 cri.go:89] found id: ""
	I1216 21:01:45.542103   60933 logs.go:282] 0 containers: []
	W1216 21:01:45.542113   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:45.542122   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:45.542134   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:45.597577   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:45.597614   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:45.614028   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:45.614075   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:45.693014   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:45.693039   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:45.693056   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:45.772260   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:45.772295   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:48.317073   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:48.332176   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:48.332242   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:48.369946   60933 cri.go:89] found id: ""
	I1216 21:01:48.369976   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.369988   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:48.369994   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:48.370059   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:48.407628   60933 cri.go:89] found id: ""
	I1216 21:01:48.407661   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.407672   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:48.407680   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:48.407742   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:48.444377   60933 cri.go:89] found id: ""
	I1216 21:01:48.444403   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.444411   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:48.444416   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:48.444467   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:48.485674   60933 cri.go:89] found id: ""
	I1216 21:01:48.485710   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.485722   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:48.485730   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:48.485785   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:48.530577   60933 cri.go:89] found id: ""
	I1216 21:01:48.530610   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.530621   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:48.530628   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:48.530693   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:48.567128   60933 cri.go:89] found id: ""
	I1216 21:01:48.567151   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.567159   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:48.567165   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:48.567216   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:48.603294   60933 cri.go:89] found id: ""
	I1216 21:01:48.603320   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.603327   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:48.603333   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:48.603392   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:48.646221   60933 cri.go:89] found id: ""
	I1216 21:01:48.646253   60933 logs.go:282] 0 containers: []
	W1216 21:01:48.646265   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:48.646288   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:48.646318   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:48.697589   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:48.697624   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:48.711916   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:48.711947   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:48.789068   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:48.789097   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:48.789113   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:48.872340   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:48.872378   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:45.937806   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.437160   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:46.819986   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.821079   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:48.456975   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:50.956101   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:51.418176   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:51.434851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:51.434948   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:51.478935   60933 cri.go:89] found id: ""
	I1216 21:01:51.478963   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.478975   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:51.478982   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:51.479043   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:51.524581   60933 cri.go:89] found id: ""
	I1216 21:01:51.524611   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.524622   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:51.524629   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:51.524686   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:51.563479   60933 cri.go:89] found id: ""
	I1216 21:01:51.563507   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.563516   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:51.563521   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:51.563578   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:51.601931   60933 cri.go:89] found id: ""
	I1216 21:01:51.601964   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.601975   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:51.601982   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:51.602044   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:51.638984   60933 cri.go:89] found id: ""
	I1216 21:01:51.639014   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.639025   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:51.639032   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:51.639093   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:51.681137   60933 cri.go:89] found id: ""
	I1216 21:01:51.681167   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.681178   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:51.681185   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:51.681263   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:51.722904   60933 cri.go:89] found id: ""
	I1216 21:01:51.722932   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.722941   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:51.722946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:51.722994   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:51.794403   60933 cri.go:89] found id: ""
	I1216 21:01:51.794434   60933 logs.go:282] 0 containers: []
	W1216 21:01:51.794444   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:51.794453   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:51.794464   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:51.850688   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:51.850724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:51.866049   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:51.866079   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:51.949844   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:51.949880   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:51.949894   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:52.028981   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:52.029023   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:50.936202   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:52.936839   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:51.321959   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:53.819864   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:53.455360   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:55.954957   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:54.570192   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:54.585405   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:54.585489   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:54.627670   60933 cri.go:89] found id: ""
	I1216 21:01:54.627701   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.627712   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:54.627719   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:54.627782   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:54.671226   60933 cri.go:89] found id: ""
	I1216 21:01:54.671265   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.671276   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:54.671283   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:54.671337   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:54.705549   60933 cri.go:89] found id: ""
	I1216 21:01:54.705581   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.705592   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:54.705600   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:54.705663   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:54.743638   60933 cri.go:89] found id: ""
	I1216 21:01:54.743664   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.743671   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:54.743677   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:54.743728   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:54.781714   60933 cri.go:89] found id: ""
	I1216 21:01:54.781750   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.781760   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:54.781767   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:54.781831   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:54.830808   60933 cri.go:89] found id: ""
	I1216 21:01:54.830840   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.830851   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:54.830859   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:54.830923   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:54.868539   60933 cri.go:89] found id: ""
	I1216 21:01:54.868565   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.868573   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:54.868578   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:54.868626   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:54.906554   60933 cri.go:89] found id: ""
	I1216 21:01:54.906587   60933 logs.go:282] 0 containers: []
	W1216 21:01:54.906595   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:54.906604   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:54.906617   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:54.960664   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:54.960696   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:54.975657   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:54.975686   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:55.052266   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:55.052293   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:55.052320   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:55.137894   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:55.137937   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:57.682769   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:01:57.699102   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:01:57.699184   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:01:57.764651   60933 cri.go:89] found id: ""
	I1216 21:01:57.764684   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.764692   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:01:57.764698   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:01:57.764755   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:01:57.805358   60933 cri.go:89] found id: ""
	I1216 21:01:57.805385   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.805395   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:01:57.805404   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:01:57.805474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:01:57.843589   60933 cri.go:89] found id: ""
	I1216 21:01:57.843623   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.843634   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:01:57.843644   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:01:57.843716   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:01:57.881725   60933 cri.go:89] found id: ""
	I1216 21:01:57.881748   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.881756   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:01:57.881761   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:01:57.881811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:01:57.922252   60933 cri.go:89] found id: ""
	I1216 21:01:57.922293   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.922305   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:01:57.922322   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:01:57.922385   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:01:57.962532   60933 cri.go:89] found id: ""
	I1216 21:01:57.962555   60933 logs.go:282] 0 containers: []
	W1216 21:01:57.962562   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:01:57.962567   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:01:57.962615   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:01:58.002021   60933 cri.go:89] found id: ""
	I1216 21:01:58.002056   60933 logs.go:282] 0 containers: []
	W1216 21:01:58.002067   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:01:58.002074   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:01:58.002137   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:01:58.035648   60933 cri.go:89] found id: ""
	I1216 21:01:58.035672   60933 logs.go:282] 0 containers: []
	W1216 21:01:58.035680   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:01:58.035688   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:01:58.035699   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:01:58.116142   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:01:58.116177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:01:58.157683   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:01:58.157717   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:01:58.211686   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:01:58.211722   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:01:58.226385   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:01:58.226409   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:01:58.302287   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:01:54.937208   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:57.437396   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:59.438489   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:56.326836   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:58.818671   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:01:57.955980   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.455212   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.802544   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:00.816325   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:00.816405   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:00.853031   60933 cri.go:89] found id: ""
	I1216 21:02:00.853057   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.853065   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:00.853070   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:00.853122   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:00.891040   60933 cri.go:89] found id: ""
	I1216 21:02:00.891071   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.891082   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:00.891089   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:00.891151   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:00.929145   60933 cri.go:89] found id: ""
	I1216 21:02:00.929168   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.929175   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:00.929181   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:00.929227   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:00.976469   60933 cri.go:89] found id: ""
	I1216 21:02:00.976492   60933 logs.go:282] 0 containers: []
	W1216 21:02:00.976500   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:00.976505   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:00.976553   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:01.015053   60933 cri.go:89] found id: ""
	I1216 21:02:01.015078   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.015086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:01.015092   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:01.015150   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:01.052859   60933 cri.go:89] found id: ""
	I1216 21:02:01.052891   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.052902   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:01.052909   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:01.053028   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:01.091209   60933 cri.go:89] found id: ""
	I1216 21:02:01.091238   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.091259   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:01.091266   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:01.091341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:01.127013   60933 cri.go:89] found id: ""
	I1216 21:02:01.127038   60933 logs.go:282] 0 containers: []
	W1216 21:02:01.127047   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:01.127058   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:01.127072   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:01.179642   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:01.179697   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:01.196390   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:01.196416   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:01.275446   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:01.275478   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:01.275493   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:01.354391   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:01.354429   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:03.897672   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:03.911596   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:03.911654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:03.955700   60933 cri.go:89] found id: ""
	I1216 21:02:03.955726   60933 logs.go:282] 0 containers: []
	W1216 21:02:03.955735   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:03.955741   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:03.955803   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:03.995661   60933 cri.go:89] found id: ""
	I1216 21:02:03.995696   60933 logs.go:282] 0 containers: []
	W1216 21:02:03.995706   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:03.995713   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:03.995772   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:04.031368   60933 cri.go:89] found id: ""
	I1216 21:02:04.031391   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.031398   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:04.031406   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:04.031455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:04.067633   60933 cri.go:89] found id: ""
	I1216 21:02:04.067659   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.067666   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:04.067671   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:04.067719   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:04.105734   60933 cri.go:89] found id: ""
	I1216 21:02:04.105758   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.105768   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:04.105773   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:04.105824   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:04.146542   60933 cri.go:89] found id: ""
	I1216 21:02:04.146564   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.146571   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:04.146577   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:04.146623   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:04.184433   60933 cri.go:89] found id: ""
	I1216 21:02:04.184462   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.184473   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:04.184480   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:04.184551   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:04.223077   60933 cri.go:89] found id: ""
	I1216 21:02:04.223106   60933 logs.go:282] 0 containers: []
	W1216 21:02:04.223117   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:04.223127   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:04.223140   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:04.279618   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:04.279656   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:04.295841   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:04.295865   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:04.372609   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:04.372632   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:04.372648   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:01.937175   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:03.937249   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:00.819801   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:02.820563   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:05.320087   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:02.955461   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:05.455023   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:07.456981   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:04.457597   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:04.457631   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:07.006004   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:07.020394   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:07.020537   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:07.064242   60933 cri.go:89] found id: ""
	I1216 21:02:07.064274   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.064283   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:07.064289   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:07.064337   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:07.108865   60933 cri.go:89] found id: ""
	I1216 21:02:07.108899   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.108910   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:07.108917   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:07.108985   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:07.149021   60933 cri.go:89] found id: ""
	I1216 21:02:07.149051   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.149060   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:07.149066   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:07.149120   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:07.187808   60933 cri.go:89] found id: ""
	I1216 21:02:07.187833   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.187843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:07.187850   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:07.187912   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:07.228748   60933 cri.go:89] found id: ""
	I1216 21:02:07.228774   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.228785   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:07.228792   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:07.228853   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:07.267961   60933 cri.go:89] found id: ""
	I1216 21:02:07.267996   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.268012   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:07.268021   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:07.268099   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:07.312464   60933 cri.go:89] found id: ""
	I1216 21:02:07.312491   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.312498   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:07.312503   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:07.312554   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:07.351902   60933 cri.go:89] found id: ""
	I1216 21:02:07.351933   60933 logs.go:282] 0 containers: []
	W1216 21:02:07.351946   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:07.351958   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:07.351974   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:07.405985   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:07.406050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:07.420796   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:07.420842   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:07.506527   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:07.506559   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:07.506574   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:07.587965   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:07.588001   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:06.437434   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:08.937843   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:07.320229   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:09.819940   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:09.954900   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:11.955004   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:10.132876   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:10.146785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:10.146858   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:10.189278   60933 cri.go:89] found id: ""
	I1216 21:02:10.189312   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.189324   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:10.189332   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:10.189402   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:10.228331   60933 cri.go:89] found id: ""
	I1216 21:02:10.228370   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.228378   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:10.228383   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:10.228436   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:10.266424   60933 cri.go:89] found id: ""
	I1216 21:02:10.266458   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.266470   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:10.266478   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:10.266542   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:10.305865   60933 cri.go:89] found id: ""
	I1216 21:02:10.305890   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.305902   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:10.305909   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:10.305968   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:10.344211   60933 cri.go:89] found id: ""
	I1216 21:02:10.344239   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.344247   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:10.344253   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:10.344314   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:10.381939   60933 cri.go:89] found id: ""
	I1216 21:02:10.381993   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.382004   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:10.382011   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:10.382076   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:10.418882   60933 cri.go:89] found id: ""
	I1216 21:02:10.418908   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.418915   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:10.418921   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:10.418972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:10.458397   60933 cri.go:89] found id: ""
	I1216 21:02:10.458425   60933 logs.go:282] 0 containers: []
	W1216 21:02:10.458434   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:10.458447   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:10.458462   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:10.472152   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:10.472180   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:10.545888   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:10.545913   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:10.545926   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:10.627223   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:10.627293   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:10.676606   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:10.676633   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:13.227283   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:13.242871   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:13.242954   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:13.280676   60933 cri.go:89] found id: ""
	I1216 21:02:13.280711   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.280723   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:13.280731   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:13.280786   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:13.321357   60933 cri.go:89] found id: ""
	I1216 21:02:13.321389   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.321400   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:13.321408   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:13.321474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:13.359002   60933 cri.go:89] found id: ""
	I1216 21:02:13.359030   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.359042   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:13.359050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:13.359116   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:13.395879   60933 cri.go:89] found id: ""
	I1216 21:02:13.395922   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.395941   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:13.395950   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:13.396017   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:13.436761   60933 cri.go:89] found id: ""
	I1216 21:02:13.436781   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.436788   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:13.436793   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:13.436852   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:13.478839   60933 cri.go:89] found id: ""
	I1216 21:02:13.478869   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.478877   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:13.478883   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:13.478947   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:13.520013   60933 cri.go:89] found id: ""
	I1216 21:02:13.520037   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.520044   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:13.520050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:13.520124   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:13.556973   60933 cri.go:89] found id: ""
	I1216 21:02:13.557001   60933 logs.go:282] 0 containers: []
	W1216 21:02:13.557013   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:13.557023   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:13.557039   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:13.613499   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:13.613537   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:13.628689   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:13.628724   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:13.706556   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:13.706576   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:13.706589   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:13.786379   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:13.786419   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:11.436179   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:13.436800   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:11.820109   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:13.820778   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:14.457666   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.955591   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.333578   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:16.347948   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:16.348020   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:16.386928   60933 cri.go:89] found id: ""
	I1216 21:02:16.386955   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.386963   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:16.386969   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:16.387033   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:16.425192   60933 cri.go:89] found id: ""
	I1216 21:02:16.425253   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.425265   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:16.425273   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:16.425355   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:16.465522   60933 cri.go:89] found id: ""
	I1216 21:02:16.465554   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.465565   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:16.465573   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:16.465638   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:16.504567   60933 cri.go:89] found id: ""
	I1216 21:02:16.504605   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.504616   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:16.504624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:16.504694   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:16.541823   60933 cri.go:89] found id: ""
	I1216 21:02:16.541852   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.541864   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:16.541872   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:16.541942   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:16.580898   60933 cri.go:89] found id: ""
	I1216 21:02:16.580927   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.580938   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:16.580946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:16.581003   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:16.626006   60933 cri.go:89] found id: ""
	I1216 21:02:16.626036   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.626046   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:16.626053   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:16.626109   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:16.662686   60933 cri.go:89] found id: ""
	I1216 21:02:16.662712   60933 logs.go:282] 0 containers: []
	W1216 21:02:16.662719   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:16.662728   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:16.662740   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:16.717939   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:16.717978   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:16.733431   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:16.733466   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:16.807379   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:16.807409   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:16.807421   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:16.896455   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:16.896492   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:15.437791   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:17.935778   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:16.321167   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:18.819624   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:18.955621   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:20.956220   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:19.442959   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:19.458684   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:19.458749   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:19.499907   60933 cri.go:89] found id: ""
	I1216 21:02:19.499938   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.499947   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:19.499954   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:19.500002   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:19.538010   60933 cri.go:89] found id: ""
	I1216 21:02:19.538035   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.538043   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:19.538049   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:19.538148   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:19.577097   60933 cri.go:89] found id: ""
	I1216 21:02:19.577131   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.577139   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:19.577145   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:19.577196   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:19.617288   60933 cri.go:89] found id: ""
	I1216 21:02:19.617316   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.617326   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:19.617332   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:19.617392   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:19.658066   60933 cri.go:89] found id: ""
	I1216 21:02:19.658090   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.658097   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:19.658103   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:19.658153   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:19.696077   60933 cri.go:89] found id: ""
	I1216 21:02:19.696108   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.696121   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:19.696131   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:19.696189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:19.737657   60933 cri.go:89] found id: ""
	I1216 21:02:19.737692   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.737704   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:19.737712   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:19.737776   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:19.778699   60933 cri.go:89] found id: ""
	I1216 21:02:19.778729   60933 logs.go:282] 0 containers: []
	W1216 21:02:19.778738   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:19.778746   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:19.778757   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:19.841941   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:19.841979   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:19.857752   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:19.857788   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:19.935980   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:19.936004   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:19.936020   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:20.019999   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:20.020046   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:22.566398   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:22.580376   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:22.580472   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:22.620240   60933 cri.go:89] found id: ""
	I1216 21:02:22.620273   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.620284   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:22.620292   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:22.620355   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:22.656413   60933 cri.go:89] found id: ""
	I1216 21:02:22.656444   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.656455   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:22.656463   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:22.656531   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:22.690956   60933 cri.go:89] found id: ""
	I1216 21:02:22.690978   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.690986   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:22.690992   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:22.691040   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:22.734851   60933 cri.go:89] found id: ""
	I1216 21:02:22.734885   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.734895   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:22.734903   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:22.734969   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:22.774416   60933 cri.go:89] found id: ""
	I1216 21:02:22.774450   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.774461   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:22.774467   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:22.774535   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:22.811162   60933 cri.go:89] found id: ""
	I1216 21:02:22.811192   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.811204   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:22.811212   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:22.811296   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:22.851955   60933 cri.go:89] found id: ""
	I1216 21:02:22.851980   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.851987   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:22.851993   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:22.852051   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:22.888699   60933 cri.go:89] found id: ""
	I1216 21:02:22.888725   60933 logs.go:282] 0 containers: []
	W1216 21:02:22.888736   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:22.888747   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:22.888769   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:22.944065   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:22.944100   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:22.960842   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:22.960872   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:23.036229   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:23.036251   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:23.036263   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:23.122493   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:23.122535   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:19.936687   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:21.937222   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:24.437190   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:20.820544   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:22.820771   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.319776   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:22.956523   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.456180   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:25.667995   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:25.682152   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:25.682222   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:25.719092   60933 cri.go:89] found id: ""
	I1216 21:02:25.719120   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.719130   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:25.719135   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:25.719190   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:25.757668   60933 cri.go:89] found id: ""
	I1216 21:02:25.757702   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.757712   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:25.757720   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:25.757791   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:25.809743   60933 cri.go:89] found id: ""
	I1216 21:02:25.809776   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.809787   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:25.809795   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:25.809857   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:25.849181   60933 cri.go:89] found id: ""
	I1216 21:02:25.849211   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.849222   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:25.849230   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:25.849295   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:25.891032   60933 cri.go:89] found id: ""
	I1216 21:02:25.891079   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.891091   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:25.891098   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:25.891169   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:25.930549   60933 cri.go:89] found id: ""
	I1216 21:02:25.930575   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.930583   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:25.930589   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:25.930639   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:25.971709   60933 cri.go:89] found id: ""
	I1216 21:02:25.971736   60933 logs.go:282] 0 containers: []
	W1216 21:02:25.971744   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:25.971749   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:25.971797   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:26.007728   60933 cri.go:89] found id: ""
	I1216 21:02:26.007760   60933 logs.go:282] 0 containers: []
	W1216 21:02:26.007769   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:26.007778   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:26.007791   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:26.059710   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:26.059752   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:26.074596   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:26.074627   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:26.145892   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:26.145913   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:26.145924   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:26.225961   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:26.226000   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:28.772974   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:28.787001   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:28.787078   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:28.828176   60933 cri.go:89] found id: ""
	I1216 21:02:28.828206   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.828214   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:28.828223   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:28.828292   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:28.872750   60933 cri.go:89] found id: ""
	I1216 21:02:28.872781   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.872792   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:28.872798   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:28.872859   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:28.914844   60933 cri.go:89] found id: ""
	I1216 21:02:28.914871   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.914879   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:28.914884   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:28.914934   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:28.953541   60933 cri.go:89] found id: ""
	I1216 21:02:28.953569   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.953579   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:28.953587   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:28.953647   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:28.992768   60933 cri.go:89] found id: ""
	I1216 21:02:28.992797   60933 logs.go:282] 0 containers: []
	W1216 21:02:28.992808   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:28.992816   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:28.992882   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:29.030069   60933 cri.go:89] found id: ""
	I1216 21:02:29.030102   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.030113   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:29.030121   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:29.030187   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:29.068629   60933 cri.go:89] found id: ""
	I1216 21:02:29.068658   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.068666   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:29.068677   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:29.068726   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:29.103664   60933 cri.go:89] found id: ""
	I1216 21:02:29.103697   60933 logs.go:282] 0 containers: []
	W1216 21:02:29.103708   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:29.103719   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:29.103732   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:29.151225   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:29.151276   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:29.209448   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:29.209499   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:29.225232   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:29.225257   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:29.309812   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:29.309832   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:29.309846   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:26.937193   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:28.937302   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:27.320052   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:29.820220   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:27.956244   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:29.957111   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:32.456969   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:31.896263   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:31.912378   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:31.912455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:31.950479   60933 cri.go:89] found id: ""
	I1216 21:02:31.950508   60933 logs.go:282] 0 containers: []
	W1216 21:02:31.950527   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:31.950535   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:31.950600   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:31.990479   60933 cri.go:89] found id: ""
	I1216 21:02:31.990504   60933 logs.go:282] 0 containers: []
	W1216 21:02:31.990515   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:31.990533   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:31.990599   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:32.032808   60933 cri.go:89] found id: ""
	I1216 21:02:32.032834   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.032843   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:32.032853   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:32.032913   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:32.069719   60933 cri.go:89] found id: ""
	I1216 21:02:32.069748   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.069759   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:32.069772   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:32.069830   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:32.106652   60933 cri.go:89] found id: ""
	I1216 21:02:32.106685   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.106694   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:32.106701   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:32.106767   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:32.145921   60933 cri.go:89] found id: ""
	I1216 21:02:32.145949   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.145957   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:32.145963   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:32.146014   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:32.206313   60933 cri.go:89] found id: ""
	I1216 21:02:32.206342   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.206351   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:32.206356   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:32.206410   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:32.262757   60933 cri.go:89] found id: ""
	I1216 21:02:32.262794   60933 logs.go:282] 0 containers: []
	W1216 21:02:32.262806   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:32.262818   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:32.262832   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:32.320221   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:32.320251   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:32.375395   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:32.375437   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:32.391103   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:32.391137   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:32.474709   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:32.474741   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:32.474757   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:31.436689   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:33.436921   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:32.320631   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:34.819726   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:34.956369   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:37.455577   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:35.058809   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:35.073074   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:35.073157   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:35.115280   60933 cri.go:89] found id: ""
	I1216 21:02:35.115305   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.115312   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:35.115318   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:35.115378   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:35.151561   60933 cri.go:89] found id: ""
	I1216 21:02:35.151589   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.151597   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:35.151603   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:35.151654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:35.192061   60933 cri.go:89] found id: ""
	I1216 21:02:35.192088   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.192095   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:35.192111   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:35.192161   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:35.231493   60933 cri.go:89] found id: ""
	I1216 21:02:35.231523   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.231531   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:35.231538   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:35.231586   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:35.271236   60933 cri.go:89] found id: ""
	I1216 21:02:35.271291   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.271300   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:35.271306   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:35.271368   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:35.309950   60933 cri.go:89] found id: ""
	I1216 21:02:35.309980   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.309991   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:35.309999   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:35.310062   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:35.347762   60933 cri.go:89] found id: ""
	I1216 21:02:35.347790   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.347797   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:35.347803   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:35.347851   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:35.390732   60933 cri.go:89] found id: ""
	I1216 21:02:35.390757   60933 logs.go:282] 0 containers: []
	W1216 21:02:35.390765   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:35.390774   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:35.390785   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:35.447068   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:35.447112   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:35.462873   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:35.462904   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:35.541120   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:35.541145   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:35.541162   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:35.627073   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:35.627120   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:38.170994   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:38.194371   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:38.194434   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:38.248023   60933 cri.go:89] found id: ""
	I1216 21:02:38.248050   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.248061   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:38.248069   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:38.248147   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:38.300143   60933 cri.go:89] found id: ""
	I1216 21:02:38.300175   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.300185   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:38.300193   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:38.300253   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:38.345273   60933 cri.go:89] found id: ""
	I1216 21:02:38.345300   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.345308   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:38.345314   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:38.345389   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:38.383032   60933 cri.go:89] found id: ""
	I1216 21:02:38.383066   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.383075   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:38.383081   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:38.383135   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:38.426042   60933 cri.go:89] found id: ""
	I1216 21:02:38.426074   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.426086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:38.426094   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:38.426159   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:38.467596   60933 cri.go:89] found id: ""
	I1216 21:02:38.467625   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.467634   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:38.467640   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:38.467692   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:38.509340   60933 cri.go:89] found id: ""
	I1216 21:02:38.509380   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.509391   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:38.509399   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:38.509470   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:38.549306   60933 cri.go:89] found id: ""
	I1216 21:02:38.549337   60933 logs.go:282] 0 containers: []
	W1216 21:02:38.549354   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:38.549365   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:38.549381   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:38.564091   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:38.564131   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:38.639173   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:38.639201   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:38.639219   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:38.716320   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:38.716376   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:38.756779   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:38.756815   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:35.437230   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:37.938595   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:36.820302   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:39.319712   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:39.954558   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:41.955761   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:41.310680   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:41.327606   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:41.327684   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:41.371622   60933 cri.go:89] found id: ""
	I1216 21:02:41.371657   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.371670   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:41.371679   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:41.371739   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:41.408149   60933 cri.go:89] found id: ""
	I1216 21:02:41.408187   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.408198   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:41.408203   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:41.408252   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:41.448445   60933 cri.go:89] found id: ""
	I1216 21:02:41.448471   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.448478   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:41.448484   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:41.448533   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:41.489957   60933 cri.go:89] found id: ""
	I1216 21:02:41.489989   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.490000   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:41.490007   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:41.490069   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:41.532891   60933 cri.go:89] found id: ""
	I1216 21:02:41.532918   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.532930   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:41.532937   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:41.532992   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:41.570315   60933 cri.go:89] found id: ""
	I1216 21:02:41.570342   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.570351   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:41.570357   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:41.570455   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:41.606833   60933 cri.go:89] found id: ""
	I1216 21:02:41.606867   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.606880   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:41.606890   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:41.606959   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:41.643862   60933 cri.go:89] found id: ""
	I1216 21:02:41.643886   60933 logs.go:282] 0 containers: []
	W1216 21:02:41.643894   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:41.643902   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:41.643914   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:41.657621   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:41.657654   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:41.732256   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:41.732281   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:41.732295   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:41.822045   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:41.822081   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:41.863900   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:41.863933   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:40.436149   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:42.436247   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:44.436916   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:41.321155   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:43.819721   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:43.956057   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:46.455802   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:44.425154   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:44.440148   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:44.440223   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:44.478216   60933 cri.go:89] found id: ""
	I1216 21:02:44.478247   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.478258   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:44.478266   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:44.478329   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:44.517054   60933 cri.go:89] found id: ""
	I1216 21:02:44.517078   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.517084   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:44.517090   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:44.517137   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:44.554683   60933 cri.go:89] found id: ""
	I1216 21:02:44.554778   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.554801   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:44.554845   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:44.554927   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:44.600748   60933 cri.go:89] found id: ""
	I1216 21:02:44.600788   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.600800   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:44.600809   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:44.600863   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:44.637564   60933 cri.go:89] found id: ""
	I1216 21:02:44.637592   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.637600   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:44.637606   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:44.637656   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:44.676619   60933 cri.go:89] found id: ""
	I1216 21:02:44.676662   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.676674   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:44.676683   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:44.676755   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:44.715920   60933 cri.go:89] found id: ""
	I1216 21:02:44.715956   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.715964   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:44.715970   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:44.716027   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:44.755134   60933 cri.go:89] found id: ""
	I1216 21:02:44.755167   60933 logs.go:282] 0 containers: []
	W1216 21:02:44.755179   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:44.755191   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:44.755202   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:44.796135   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:44.796164   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:44.850550   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:44.850593   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:44.865278   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:44.865305   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:44.942987   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:44.943013   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:44.943026   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
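	Each retry cycle above follows the same pattern: pgrep checks whether a kube-apiserver process exists, then crictl is queried once per expected control-plane container; every query returns an empty ID list, so only the kubelet, dmesg and CRI-O journals plus the container status output remain to be gathered. A rough shell equivalent of that container sweep, reusing the exact crictl invocation from the log, is:

		for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
		  sudo crictl ps -a --quiet --name="$name"
		done

	On this node each iteration prints nothing, matching the found id: "" lines.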
	I1216 21:02:47.529850   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:47.546292   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:47.546369   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:47.589597   60933 cri.go:89] found id: ""
	I1216 21:02:47.589627   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.589640   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:47.589648   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:47.589713   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:47.630998   60933 cri.go:89] found id: ""
	I1216 21:02:47.631030   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.631043   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:47.631051   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:47.631118   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:47.670118   60933 cri.go:89] found id: ""
	I1216 21:02:47.670150   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.670162   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:47.670169   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:47.670233   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:47.714516   60933 cri.go:89] found id: ""
	I1216 21:02:47.714549   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.714560   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:47.714568   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:47.714631   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:47.752042   60933 cri.go:89] found id: ""
	I1216 21:02:47.752074   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.752086   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:47.752093   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:47.752158   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:47.793612   60933 cri.go:89] found id: ""
	I1216 21:02:47.793645   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.793656   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:47.793664   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:47.793734   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:47.833489   60933 cri.go:89] found id: ""
	I1216 21:02:47.833518   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.833529   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:47.833541   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:47.833602   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:47.869744   60933 cri.go:89] found id: ""
	I1216 21:02:47.869772   60933 logs.go:282] 0 containers: []
	W1216 21:02:47.869783   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:47.869793   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:47.869809   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:47.910640   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:47.910674   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:47.965747   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:47.965781   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:47.979760   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:47.979786   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:48.056887   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:48.056917   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:48.056933   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:46.439409   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.937248   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:46.320935   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.820700   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:48.955697   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.955859   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.641224   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:50.657267   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:50.657346   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:50.696890   60933 cri.go:89] found id: ""
	I1216 21:02:50.696916   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.696924   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:50.696930   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:50.696993   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:50.734485   60933 cri.go:89] found id: ""
	I1216 21:02:50.734514   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.734524   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:50.734533   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:50.734598   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:50.776241   60933 cri.go:89] found id: ""
	I1216 21:02:50.776268   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.776277   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:50.776283   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:50.776358   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:50.816449   60933 cri.go:89] found id: ""
	I1216 21:02:50.816482   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.816493   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:50.816501   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:50.816561   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:50.857458   60933 cri.go:89] found id: ""
	I1216 21:02:50.857481   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.857488   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:50.857494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:50.857556   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:50.895367   60933 cri.go:89] found id: ""
	I1216 21:02:50.895391   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.895398   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:50.895404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:50.895466   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:50.934101   60933 cri.go:89] found id: ""
	I1216 21:02:50.934128   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.934138   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:50.934152   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:50.934212   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:50.978625   60933 cri.go:89] found id: ""
	I1216 21:02:50.978654   60933 logs.go:282] 0 containers: []
	W1216 21:02:50.978665   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:50.978675   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:50.978688   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:51.061867   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:51.061908   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:51.101188   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:51.101228   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:51.157426   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:51.157470   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:51.172835   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:51.172882   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:51.247678   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:53.748503   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:53.763357   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:53.763425   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:53.807963   60933 cri.go:89] found id: ""
	I1216 21:02:53.807990   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.807999   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:53.808005   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:53.808063   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:53.846840   60933 cri.go:89] found id: ""
	I1216 21:02:53.846867   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.846876   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:53.846881   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:53.846929   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:53.885099   60933 cri.go:89] found id: ""
	I1216 21:02:53.885131   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.885146   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:53.885156   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:53.885226   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:53.923859   60933 cri.go:89] found id: ""
	I1216 21:02:53.923890   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.923901   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:53.923908   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:53.923972   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:53.964150   60933 cri.go:89] found id: ""
	I1216 21:02:53.964176   60933 logs.go:282] 0 containers: []
	W1216 21:02:53.964186   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:53.964201   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:53.964265   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:54.004676   60933 cri.go:89] found id: ""
	I1216 21:02:54.004707   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.004718   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:54.004725   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:54.004789   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:54.042560   60933 cri.go:89] found id: ""
	I1216 21:02:54.042585   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.042595   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:54.042603   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:54.042666   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:54.081002   60933 cri.go:89] found id: ""
	I1216 21:02:54.081030   60933 logs.go:282] 0 containers: []
	W1216 21:02:54.081038   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:54.081046   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:54.081058   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:54.132825   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:54.132865   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:54.147793   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:54.147821   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:54.226668   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:54.226692   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:54.226704   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:54.307792   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:54.307832   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:50.938230   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:53.436746   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:50.820949   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:53.320283   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:52.957187   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:54.958212   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.456612   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:56.852207   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:56.866404   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:56.866469   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:02:56.911786   60933 cri.go:89] found id: ""
	I1216 21:02:56.911811   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.911820   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:02:56.911829   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:02:56.911886   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:02:56.953491   60933 cri.go:89] found id: ""
	I1216 21:02:56.953520   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.953535   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:02:56.953543   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:02:56.953610   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:02:56.991569   60933 cri.go:89] found id: ""
	I1216 21:02:56.991605   60933 logs.go:282] 0 containers: []
	W1216 21:02:56.991616   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:02:56.991622   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:02:56.991685   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:02:57.026808   60933 cri.go:89] found id: ""
	I1216 21:02:57.026837   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.026845   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:02:57.026851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:02:57.026913   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:02:57.065539   60933 cri.go:89] found id: ""
	I1216 21:02:57.065569   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.065577   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:02:57.065583   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:02:57.065642   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:02:57.103911   60933 cri.go:89] found id: ""
	I1216 21:02:57.103942   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.103952   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:02:57.103960   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:02:57.104015   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:02:57.141177   60933 cri.go:89] found id: ""
	I1216 21:02:57.141200   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.141207   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:02:57.141213   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:02:57.141262   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:02:57.178532   60933 cri.go:89] found id: ""
	I1216 21:02:57.178590   60933 logs.go:282] 0 containers: []
	W1216 21:02:57.178604   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:02:57.178614   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:02:57.178629   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:02:57.234811   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:02:57.234846   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:02:57.251540   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:02:57.251569   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:02:57.329029   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:02:57.329061   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:02:57.329077   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:02:57.412624   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:02:57.412665   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:55.436981   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.438061   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:55.819607   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:57.819648   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.820705   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.955043   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:01.956284   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:02:59.960422   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:02:59.974889   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:02:59.974966   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:00.012641   60933 cri.go:89] found id: ""
	I1216 21:03:00.012669   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.012676   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:00.012682   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:00.012730   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:00.053730   60933 cri.go:89] found id: ""
	I1216 21:03:00.053766   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.053778   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:00.053785   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:00.053847   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:00.091213   60933 cri.go:89] found id: ""
	I1216 21:03:00.091261   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.091274   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:00.091283   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:00.091357   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:00.131357   60933 cri.go:89] found id: ""
	I1216 21:03:00.131382   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.131390   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:00.131396   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:00.131460   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:00.168331   60933 cri.go:89] found id: ""
	I1216 21:03:00.168362   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.168373   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:00.168380   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:00.168446   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:00.208326   60933 cri.go:89] found id: ""
	I1216 21:03:00.208360   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.208369   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:00.208377   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:00.208440   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:00.245775   60933 cri.go:89] found id: ""
	I1216 21:03:00.245800   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.245808   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:00.245814   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:00.245863   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:00.283062   60933 cri.go:89] found id: ""
	I1216 21:03:00.283091   60933 logs.go:282] 0 containers: []
	W1216 21:03:00.283100   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:00.283108   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:00.283119   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:00.358767   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:00.358787   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:00.358799   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:00.443422   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:00.443460   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:00.491511   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:00.491551   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:00.566131   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:00.566172   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
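	When no control-plane containers are found, the fallback log sources collected on every pass are the kubelet journal, dmesg, the (failing) describe-nodes output, the CRI-O journal and the container status listing. The same data can be pulled directly on the node with the commands already shown in the log:

		sudo journalctl -u kubelet -n 400
		sudo journalctl -u crio -n 400
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	These are the most useful places to look for why the apiserver container never started.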
	I1216 21:03:03.080319   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:03.094733   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:03.094818   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:03.132388   60933 cri.go:89] found id: ""
	I1216 21:03:03.132419   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.132428   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:03.132433   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:03.132488   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:03.172345   60933 cri.go:89] found id: ""
	I1216 21:03:03.172374   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.172386   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:03.172393   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:03.172474   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:03.210444   60933 cri.go:89] found id: ""
	I1216 21:03:03.210479   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.210488   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:03.210494   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:03.210544   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:03.248605   60933 cri.go:89] found id: ""
	I1216 21:03:03.248644   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.248656   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:03.248664   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:03.248723   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:03.286822   60933 cri.go:89] found id: ""
	I1216 21:03:03.286854   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.286862   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:03.286868   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:03.286921   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:03.329304   60933 cri.go:89] found id: ""
	I1216 21:03:03.329333   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.329344   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:03.329352   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:03.329417   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:03.367337   60933 cri.go:89] found id: ""
	I1216 21:03:03.367361   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.367368   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:03.367373   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:03.367420   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:03.409799   60933 cri.go:89] found id: ""
	I1216 21:03:03.409821   60933 logs.go:282] 0 containers: []
	W1216 21:03:03.409829   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:03.409838   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:03.409850   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:03.466941   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:03.466976   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:03.483090   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:03.483117   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:03.566835   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:03.566860   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:03.566878   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:03.649747   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:03.649793   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:02:59.936221   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:01.936251   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:03.936714   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:02.319063   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:04.319653   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:03.956397   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:05.956531   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:06.193505   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:06.207797   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:06.207878   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:06.245401   60933 cri.go:89] found id: ""
	I1216 21:03:06.245437   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.245448   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:06.245456   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:06.245521   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:06.301205   60933 cri.go:89] found id: ""
	I1216 21:03:06.301239   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.301250   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:06.301257   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:06.301326   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:06.340325   60933 cri.go:89] found id: ""
	I1216 21:03:06.340352   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.340362   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:06.340369   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:06.340429   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:06.378321   60933 cri.go:89] found id: ""
	I1216 21:03:06.378351   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.378359   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:06.378365   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:06.378422   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:06.416354   60933 cri.go:89] found id: ""
	I1216 21:03:06.416390   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.416401   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:06.416409   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:06.416473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:06.459926   60933 cri.go:89] found id: ""
	I1216 21:03:06.459955   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.459967   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:06.459975   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:06.460063   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:06.501818   60933 cri.go:89] found id: ""
	I1216 21:03:06.501849   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.501860   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:06.501866   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:06.501926   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:06.537552   60933 cri.go:89] found id: ""
	I1216 21:03:06.537583   60933 logs.go:282] 0 containers: []
	W1216 21:03:06.537598   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:06.537607   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:06.537621   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:06.592170   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:06.592212   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:06.607148   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:06.607183   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:06.676114   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:06.676140   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:06.676151   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:06.756009   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:06.756052   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:09.298166   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:09.313104   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:09.313189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:09.356598   60933 cri.go:89] found id: ""
	I1216 21:03:09.356625   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.356640   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:09.356649   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:09.356715   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:05.937241   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:07.938858   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:06.322260   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:08.818974   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:08.455838   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:10.955332   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:09.395406   60933 cri.go:89] found id: ""
	I1216 21:03:09.395439   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.395449   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:09.395456   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:09.395521   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:09.440401   60933 cri.go:89] found id: ""
	I1216 21:03:09.440423   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.440430   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:09.440435   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:09.440504   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:09.478798   60933 cri.go:89] found id: ""
	I1216 21:03:09.478828   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.478843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:09.478853   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:09.478921   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:09.515542   60933 cri.go:89] found id: ""
	I1216 21:03:09.515575   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.515587   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:09.515596   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:09.515654   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:09.554150   60933 cri.go:89] found id: ""
	I1216 21:03:09.554183   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.554194   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:09.554205   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:09.554279   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:09.591699   60933 cri.go:89] found id: ""
	I1216 21:03:09.591730   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.591740   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:09.591747   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:09.591811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:09.629938   60933 cri.go:89] found id: ""
	I1216 21:03:09.629970   60933 logs.go:282] 0 containers: []
	W1216 21:03:09.629980   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:09.629991   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:09.630008   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:09.711255   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:09.711284   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:09.711300   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:09.790202   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:09.790243   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:09.839567   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:09.839597   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:09.893010   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:09.893050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:12.409934   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:12.423715   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:12.423789   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:12.461995   60933 cri.go:89] found id: ""
	I1216 21:03:12.462038   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.462046   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:12.462052   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:12.462101   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:12.501738   60933 cri.go:89] found id: ""
	I1216 21:03:12.501769   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.501779   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:12.501785   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:12.501833   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:12.541758   60933 cri.go:89] found id: ""
	I1216 21:03:12.541785   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.541795   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:12.541802   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:12.541850   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:12.579173   60933 cri.go:89] found id: ""
	I1216 21:03:12.579199   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.579206   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:12.579212   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:12.579302   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:12.624382   60933 cri.go:89] found id: ""
	I1216 21:03:12.624407   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.624418   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:12.624426   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:12.624488   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:12.665139   60933 cri.go:89] found id: ""
	I1216 21:03:12.665178   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.665190   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:12.665200   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:12.665274   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:12.711586   60933 cri.go:89] found id: ""
	I1216 21:03:12.711611   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.711619   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:12.711627   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:12.711678   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:12.761566   60933 cri.go:89] found id: ""
	I1216 21:03:12.761600   60933 logs.go:282] 0 containers: []
	W1216 21:03:12.761612   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:12.761624   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:12.761640   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:12.824282   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:12.824315   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:12.839335   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:12.839371   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:12.918317   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:12.918341   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:12.918357   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:13.000375   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:13.000410   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:10.438136   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:12.936742   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:11.319284   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:13.320036   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:15.322965   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:12.955450   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:14.956186   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:16.956603   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:15.542372   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:15.556877   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:15.556960   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:15.599345   60933 cri.go:89] found id: ""
	I1216 21:03:15.599378   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.599389   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:15.599414   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:15.599479   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:15.642072   60933 cri.go:89] found id: ""
	I1216 21:03:15.642106   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.642116   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:15.642124   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:15.642189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:15.679989   60933 cri.go:89] found id: ""
	I1216 21:03:15.680025   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.680036   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:15.680044   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:15.680103   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:15.718343   60933 cri.go:89] found id: ""
	I1216 21:03:15.718371   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.718378   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:15.718384   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:15.718433   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:15.759937   60933 cri.go:89] found id: ""
	I1216 21:03:15.759971   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.759981   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:15.759988   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:15.760081   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:15.801434   60933 cri.go:89] found id: ""
	I1216 21:03:15.801463   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.801471   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:15.801477   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:15.801540   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:15.841855   60933 cri.go:89] found id: ""
	I1216 21:03:15.841879   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.841886   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:15.841892   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:15.841962   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:15.883951   60933 cri.go:89] found id: ""
	I1216 21:03:15.883974   60933 logs.go:282] 0 containers: []
	W1216 21:03:15.883982   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:15.883990   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:15.884004   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:15.960868   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:15.960902   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:16.005700   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:16.005730   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:16.061128   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:16.061165   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:16.075601   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:16.075630   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:16.147810   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:18.648677   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:18.663298   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:18.663367   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:18.713281   60933 cri.go:89] found id: ""
	I1216 21:03:18.713313   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.713324   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:18.713332   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:18.713396   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:18.764861   60933 cri.go:89] found id: ""
	I1216 21:03:18.764892   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.764905   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:18.764912   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:18.764978   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:18.816140   60933 cri.go:89] found id: ""
	I1216 21:03:18.816170   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.816180   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:18.816188   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:18.816251   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:18.852118   60933 cri.go:89] found id: ""
	I1216 21:03:18.852151   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.852163   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:18.852171   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:18.852235   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:18.887996   60933 cri.go:89] found id: ""
	I1216 21:03:18.888018   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.888025   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:18.888031   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:18.888089   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:18.925415   60933 cri.go:89] found id: ""
	I1216 21:03:18.925437   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.925445   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:18.925451   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:18.925498   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:18.964853   60933 cri.go:89] found id: ""
	I1216 21:03:18.964884   60933 logs.go:282] 0 containers: []
	W1216 21:03:18.964892   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:18.964897   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:18.964964   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:19.000822   60933 cri.go:89] found id: ""
	I1216 21:03:19.000848   60933 logs.go:282] 0 containers: []
	W1216 21:03:19.000856   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:19.000865   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:19.000879   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:19.051571   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:19.051612   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:19.066737   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:19.066767   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:19.143120   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:19.143144   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:19.143156   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:19.229811   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:19.229850   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:15.437189   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:17.439345   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:17.820374   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:19.820460   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:19.455707   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:21.955275   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:21.776440   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:21.792869   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:21.792951   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:21.831100   60933 cri.go:89] found id: ""
	I1216 21:03:21.831127   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.831134   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:21.831140   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:21.831196   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:21.869124   60933 cri.go:89] found id: ""
	I1216 21:03:21.869147   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.869155   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:21.869160   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:21.869215   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:21.909891   60933 cri.go:89] found id: ""
	I1216 21:03:21.909926   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.909938   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:21.909946   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:21.910032   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:21.949140   60933 cri.go:89] found id: ""
	I1216 21:03:21.949169   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.949179   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:21.949186   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:21.949245   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:21.987741   60933 cri.go:89] found id: ""
	I1216 21:03:21.987771   60933 logs.go:282] 0 containers: []
	W1216 21:03:21.987780   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:21.987785   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:21.987839   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:22.025565   60933 cri.go:89] found id: ""
	I1216 21:03:22.025593   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.025601   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:22.025607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:22.025659   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:22.062076   60933 cri.go:89] found id: ""
	I1216 21:03:22.062110   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.062120   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:22.062127   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:22.062198   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:22.102037   60933 cri.go:89] found id: ""
	I1216 21:03:22.102065   60933 logs.go:282] 0 containers: []
	W1216 21:03:22.102093   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:22.102105   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:22.102122   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:22.159185   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:22.159219   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:22.175139   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:22.175168   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:22.255769   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:22.255801   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:22.255817   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:22.339633   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:22.339681   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:19.937328   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:22.435709   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.436704   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:22.319227   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.819278   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.455668   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:26.956382   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:24.883865   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:24.898198   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:24.898287   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:24.939472   60933 cri.go:89] found id: ""
	I1216 21:03:24.939500   60933 logs.go:282] 0 containers: []
	W1216 21:03:24.939511   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:24.939518   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:24.939583   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:24.981798   60933 cri.go:89] found id: ""
	I1216 21:03:24.981822   60933 logs.go:282] 0 containers: []
	W1216 21:03:24.981829   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:24.981834   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:24.981889   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:25.021332   60933 cri.go:89] found id: ""
	I1216 21:03:25.021366   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.021373   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:25.021379   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:25.021431   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:25.057811   60933 cri.go:89] found id: ""
	I1216 21:03:25.057836   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.057843   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:25.057848   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:25.057907   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:25.093852   60933 cri.go:89] found id: ""
	I1216 21:03:25.093881   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.093890   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:25.093895   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:25.093945   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:25.132779   60933 cri.go:89] found id: ""
	I1216 21:03:25.132813   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.132825   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:25.132834   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:25.132912   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:25.173942   60933 cri.go:89] found id: ""
	I1216 21:03:25.173967   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.173974   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:25.173990   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:25.174048   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:25.213105   60933 cri.go:89] found id: ""
	I1216 21:03:25.213127   60933 logs.go:282] 0 containers: []
	W1216 21:03:25.213135   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:25.213144   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:25.213155   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:25.267517   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:25.267557   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:25.284144   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:25.284177   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:25.362901   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:25.362931   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:25.362947   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:25.450193   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:25.450227   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:27.995716   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:28.012044   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:28.012138   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:28.050404   60933 cri.go:89] found id: ""
	I1216 21:03:28.050432   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.050441   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:28.050446   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:28.050492   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:28.087830   60933 cri.go:89] found id: ""
	I1216 21:03:28.087855   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.087862   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:28.087885   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:28.087933   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:28.125122   60933 cri.go:89] found id: ""
	I1216 21:03:28.125147   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.125154   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:28.125160   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:28.125233   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:28.160619   60933 cri.go:89] found id: ""
	I1216 21:03:28.160646   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.160655   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:28.160661   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:28.160726   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:28.198951   60933 cri.go:89] found id: ""
	I1216 21:03:28.198977   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.198986   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:28.198993   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:28.199059   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:28.236596   60933 cri.go:89] found id: ""
	I1216 21:03:28.236621   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.236629   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:28.236635   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:28.236707   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:28.273955   60933 cri.go:89] found id: ""
	I1216 21:03:28.273979   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.273986   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:28.273992   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:28.274061   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:28.311908   60933 cri.go:89] found id: ""
	I1216 21:03:28.311943   60933 logs.go:282] 0 containers: []
	W1216 21:03:28.311954   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:28.311965   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:28.311979   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:28.363870   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:28.363910   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:28.379919   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:28.379945   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:28.459998   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:28.460019   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:28.460030   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:28.543229   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:28.543306   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:26.936661   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:29.437169   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:26.820563   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:29.319981   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:28.956791   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.456708   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.086525   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:31.100833   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:31.100950   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:31.141356   60933 cri.go:89] found id: ""
	I1216 21:03:31.141385   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.141396   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:31.141403   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:31.141465   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:31.176609   60933 cri.go:89] found id: ""
	I1216 21:03:31.176641   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.176650   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:31.176657   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:31.176721   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:31.213959   60933 cri.go:89] found id: ""
	I1216 21:03:31.213984   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.213991   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:31.213997   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:31.214058   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:31.255183   60933 cri.go:89] found id: ""
	I1216 21:03:31.255208   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.255215   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:31.255220   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:31.255297   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:31.293475   60933 cri.go:89] found id: ""
	I1216 21:03:31.293501   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.293508   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:31.293514   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:31.293561   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:31.332010   60933 cri.go:89] found id: ""
	I1216 21:03:31.332041   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.332052   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:31.332061   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:31.332119   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:31.370301   60933 cri.go:89] found id: ""
	I1216 21:03:31.370331   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.370342   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:31.370349   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:31.370414   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:31.419526   60933 cri.go:89] found id: ""
	I1216 21:03:31.419553   60933 logs.go:282] 0 containers: []
	W1216 21:03:31.419561   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:31.419570   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:31.419583   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:31.480125   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:31.480160   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:31.495464   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:31.495497   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:31.570747   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:31.570773   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:31.570788   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:31.651521   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:31.651564   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:34.200969   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:34.216519   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:34.216596   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:34.254185   60933 cri.go:89] found id: ""
	I1216 21:03:34.254218   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.254227   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:34.254242   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:34.254312   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:34.293194   60933 cri.go:89] found id: ""
	I1216 21:03:34.293225   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.293236   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:34.293242   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:34.293297   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:34.335002   60933 cri.go:89] found id: ""
	I1216 21:03:34.335030   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.335042   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:34.335050   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:34.335112   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:34.370854   60933 cri.go:89] found id: ""
	I1216 21:03:34.370880   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.370887   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:34.370893   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:34.370938   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:31.439597   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.935941   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:31.820337   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.820497   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:33.955185   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:36.455713   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:34.409155   60933 cri.go:89] found id: ""
	I1216 21:03:34.409181   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.409189   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:34.409195   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:34.409256   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:34.448555   60933 cri.go:89] found id: ""
	I1216 21:03:34.448583   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.448594   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:34.448601   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:34.448663   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:34.486800   60933 cri.go:89] found id: ""
	I1216 21:03:34.486829   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.486842   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:34.486851   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:34.486919   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:34.530274   60933 cri.go:89] found id: ""
	I1216 21:03:34.530299   60933 logs.go:282] 0 containers: []
	W1216 21:03:34.530307   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:34.530317   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:34.530335   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:34.601587   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:34.601620   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:34.601637   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:34.680215   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:34.680250   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:34.721362   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:34.721389   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:34.776652   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:34.776693   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:37.292877   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:37.306976   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:37.307060   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:37.349370   60933 cri.go:89] found id: ""
	I1216 21:03:37.349405   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.349416   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:37.349424   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:37.349486   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:37.387213   60933 cri.go:89] found id: ""
	I1216 21:03:37.387271   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.387285   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:37.387294   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:37.387361   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:37.427138   60933 cri.go:89] found id: ""
	I1216 21:03:37.427164   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.427175   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:37.427182   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:37.427269   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:37.466751   60933 cri.go:89] found id: ""
	I1216 21:03:37.466776   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.466783   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:37.466788   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:37.466846   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:37.505078   60933 cri.go:89] found id: ""
	I1216 21:03:37.505115   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.505123   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:37.505128   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:37.505189   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:37.548642   60933 cri.go:89] found id: ""
	I1216 21:03:37.548665   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.548673   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:37.548679   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:37.548738   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:37.592354   60933 cri.go:89] found id: ""
	I1216 21:03:37.592379   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.592386   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:37.592391   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:37.592441   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:37.631179   60933 cri.go:89] found id: ""
	I1216 21:03:37.631212   60933 logs.go:282] 0 containers: []
	W1216 21:03:37.631221   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:37.631230   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:37.631261   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:37.683021   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:37.683062   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:37.698056   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:37.698087   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:37.774368   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:37.774397   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:37.774422   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:37.860470   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:37.860511   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:35.936409   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:37.936652   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:36.319436   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:38.819727   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:38.456251   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.957354   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.405278   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:40.420390   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:40.420473   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:40.463963   60933 cri.go:89] found id: ""
	I1216 21:03:40.463994   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.464033   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:40.464041   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:40.464107   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:40.510321   60933 cri.go:89] found id: ""
	I1216 21:03:40.510352   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.510369   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:40.510376   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:40.510441   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:40.546580   60933 cri.go:89] found id: ""
	I1216 21:03:40.546610   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.546619   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:40.546624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:40.546686   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:40.583109   60933 cri.go:89] found id: ""
	I1216 21:03:40.583136   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.583144   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:40.583149   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:40.583202   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:40.628747   60933 cri.go:89] found id: ""
	I1216 21:03:40.628771   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.628778   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:40.628784   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:40.628845   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:40.663757   60933 cri.go:89] found id: ""
	I1216 21:03:40.663785   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.663796   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:40.663804   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:40.663867   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:40.703483   60933 cri.go:89] found id: ""
	I1216 21:03:40.703513   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.703522   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:40.703528   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:40.703592   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:40.742585   60933 cri.go:89] found id: ""
	I1216 21:03:40.742622   60933 logs.go:282] 0 containers: []
	W1216 21:03:40.742632   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:40.742641   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:40.742653   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:40.757771   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:40.757809   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:40.837615   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:40.837642   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:40.837656   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:40.915403   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:40.915442   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:40.960762   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:40.960790   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:43.515302   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:43.530831   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:43.530906   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:43.571680   60933 cri.go:89] found id: ""
	I1216 21:03:43.571704   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.571712   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:43.571718   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:43.571779   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:43.615912   60933 cri.go:89] found id: ""
	I1216 21:03:43.615940   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.615948   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:43.615955   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:43.616013   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:43.654206   60933 cri.go:89] found id: ""
	I1216 21:03:43.654231   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.654241   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:43.654249   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:43.654309   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:43.690509   60933 cri.go:89] found id: ""
	I1216 21:03:43.690533   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.690541   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:43.690548   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:43.690595   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:43.728601   60933 cri.go:89] found id: ""
	I1216 21:03:43.728627   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.728634   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:43.728639   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:43.728685   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:43.769092   60933 cri.go:89] found id: ""
	I1216 21:03:43.769130   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.769198   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:43.769215   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:43.769292   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:43.812492   60933 cri.go:89] found id: ""
	I1216 21:03:43.812525   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.812537   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:43.812544   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:43.812613   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:43.852748   60933 cri.go:89] found id: ""
	I1216 21:03:43.852778   60933 logs.go:282] 0 containers: []
	W1216 21:03:43.852787   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:43.852795   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:43.852807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:43.907800   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:43.907839   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:43.922806   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:43.922833   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:44.002511   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:44.002538   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:44.002551   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:44.081760   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:44.081801   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:40.437134   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:42.437214   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:40.820244   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:43.321298   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:43.455891   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:45.456281   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:46.625868   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:46.640266   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:46.640341   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:46.677137   60933 cri.go:89] found id: ""
	I1216 21:03:46.677168   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.677179   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:46.677185   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:46.677241   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:46.714340   60933 cri.go:89] found id: ""
	I1216 21:03:46.714373   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.714382   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:46.714389   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:46.714449   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:46.752713   60933 cri.go:89] found id: ""
	I1216 21:03:46.752743   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.752754   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:46.752763   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:46.752827   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:46.790787   60933 cri.go:89] found id: ""
	I1216 21:03:46.790821   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.790837   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:46.790845   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:46.790902   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:46.827905   60933 cri.go:89] found id: ""
	I1216 21:03:46.827934   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.827945   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:46.827954   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:46.828023   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:46.863522   60933 cri.go:89] found id: ""
	I1216 21:03:46.863547   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.863563   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:46.863570   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:46.863634   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:46.906005   60933 cri.go:89] found id: ""
	I1216 21:03:46.906035   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.906044   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:46.906049   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:46.906103   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:46.947639   60933 cri.go:89] found id: ""
	I1216 21:03:46.947668   60933 logs.go:282] 0 containers: []
	W1216 21:03:46.947679   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:46.947691   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:46.947706   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:47.001693   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:47.001732   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:47.023122   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:47.023166   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:47.108257   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:47.108291   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:47.108303   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:47.184768   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:47.184807   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:44.940074   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.437155   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:45.819943   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.820443   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.820700   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:47.955794   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.960595   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:52.455630   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:49.729433   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:49.743836   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:49.743903   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:49.783021   60933 cri.go:89] found id: ""
	I1216 21:03:49.783054   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.783066   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:49.783074   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:49.783144   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:49.820371   60933 cri.go:89] found id: ""
	I1216 21:03:49.820399   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.820409   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:49.820416   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:49.820476   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:49.857918   60933 cri.go:89] found id: ""
	I1216 21:03:49.857948   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.857959   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:49.857967   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:49.858033   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:49.899517   60933 cri.go:89] found id: ""
	I1216 21:03:49.899548   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.899558   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:49.899565   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:49.899632   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:49.938771   60933 cri.go:89] found id: ""
	I1216 21:03:49.938797   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.938805   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:49.938810   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:49.938857   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:49.975748   60933 cri.go:89] found id: ""
	I1216 21:03:49.975781   60933 logs.go:282] 0 containers: []
	W1216 21:03:49.975792   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:49.975800   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:49.975876   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:50.013057   60933 cri.go:89] found id: ""
	I1216 21:03:50.013082   60933 logs.go:282] 0 containers: []
	W1216 21:03:50.013090   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:50.013127   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:50.013178   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:50.049106   60933 cri.go:89] found id: ""
	I1216 21:03:50.049138   60933 logs.go:282] 0 containers: []
	W1216 21:03:50.049150   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:50.049161   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:50.049176   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:50.063815   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:50.063847   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:50.137801   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:50.137826   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:50.137841   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:50.218456   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:50.218495   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:50.263347   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:50.263379   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:52.824077   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:52.838096   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:52.838185   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:52.880550   60933 cri.go:89] found id: ""
	I1216 21:03:52.880582   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.880593   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:52.880600   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:52.880658   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:52.919728   60933 cri.go:89] found id: ""
	I1216 21:03:52.919751   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.919759   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:52.919764   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:52.919819   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:52.957519   60933 cri.go:89] found id: ""
	I1216 21:03:52.957542   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.957549   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:52.957555   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:52.957607   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:52.996631   60933 cri.go:89] found id: ""
	I1216 21:03:52.996663   60933 logs.go:282] 0 containers: []
	W1216 21:03:52.996673   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:52.996681   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:52.996745   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:53.059902   60933 cri.go:89] found id: ""
	I1216 21:03:53.060014   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.060030   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:53.060039   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:53.060105   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:53.099367   60933 cri.go:89] found id: ""
	I1216 21:03:53.099395   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.099406   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:53.099419   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:53.099486   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:53.140668   60933 cri.go:89] found id: ""
	I1216 21:03:53.140696   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.140704   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:53.140709   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:53.140777   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:53.179182   60933 cri.go:89] found id: ""
	I1216 21:03:53.179208   60933 logs.go:282] 0 containers: []
	W1216 21:03:53.179216   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:53.179225   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:53.179236   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:53.233441   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:53.233481   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:53.247526   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:53.247569   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:53.321868   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:53.321895   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:53.321911   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:53.410904   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:53.410959   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:49.936523   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:51.936955   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.441538   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:52.319658   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.319887   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:54.955490   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:57.456080   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:55.954371   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:55.968506   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:55.968570   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:56.005087   60933 cri.go:89] found id: ""
	I1216 21:03:56.005118   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.005130   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:56.005137   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:56.005205   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:56.039443   60933 cri.go:89] found id: ""
	I1216 21:03:56.039467   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.039475   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:56.039486   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:56.039537   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:56.078181   60933 cri.go:89] found id: ""
	I1216 21:03:56.078213   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.078224   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:56.078231   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:56.078289   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:56.115809   60933 cri.go:89] found id: ""
	I1216 21:03:56.115833   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.115841   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:56.115848   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:56.115901   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:56.154299   60933 cri.go:89] found id: ""
	I1216 21:03:56.154323   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.154330   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:56.154336   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:56.154395   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:56.193069   60933 cri.go:89] found id: ""
	I1216 21:03:56.193098   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.193106   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:56.193112   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:56.193161   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:56.231067   60933 cri.go:89] found id: ""
	I1216 21:03:56.231099   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.231118   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:56.231125   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:56.231191   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:56.270980   60933 cri.go:89] found id: ""
	I1216 21:03:56.271011   60933 logs.go:282] 0 containers: []
	W1216 21:03:56.271022   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:56.271035   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:56.271050   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:03:56.321374   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:56.321405   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:56.336802   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:56.336847   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:03:56.414052   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:56.414078   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:56.414091   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:56.499118   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:56.499158   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:59.049386   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:03:59.063191   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:03:59.063300   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:03:59.102136   60933 cri.go:89] found id: ""
	I1216 21:03:59.102169   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.102180   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:03:59.102187   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:03:59.102255   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:03:59.138311   60933 cri.go:89] found id: ""
	I1216 21:03:59.138340   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.138357   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:03:59.138364   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:03:59.138431   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:03:59.176131   60933 cri.go:89] found id: ""
	I1216 21:03:59.176159   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.176169   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:03:59.176177   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:03:59.176259   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:03:59.214274   60933 cri.go:89] found id: ""
	I1216 21:03:59.214308   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.214320   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:03:59.214327   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:03:59.214397   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:03:59.254499   60933 cri.go:89] found id: ""
	I1216 21:03:59.254524   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.254531   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:03:59.254537   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:03:59.254602   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:03:59.292715   60933 cri.go:89] found id: ""
	I1216 21:03:59.292755   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.292765   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:03:59.292772   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:03:59.292836   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:03:59.333279   60933 cri.go:89] found id: ""
	I1216 21:03:59.333314   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.333325   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:03:59.333332   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:03:59.333404   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:03:59.372071   60933 cri.go:89] found id: ""
	I1216 21:03:59.372104   60933 logs.go:282] 0 containers: []
	W1216 21:03:59.372116   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:03:59.372126   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:03:59.372143   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:03:59.389021   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:03:59.389051   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 21:03:56.936508   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:59.438217   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:56.323300   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:58.819599   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:03:59.456242   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:01.956873   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	W1216 21:03:59.503281   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:03:59.503304   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:03:59.503316   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:03:59.581761   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:03:59.581797   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:03:59.627604   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:03:59.627646   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:02.179425   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:02.195786   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:04:02.195850   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:04:02.239763   60933 cri.go:89] found id: ""
	I1216 21:04:02.239790   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.239801   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:04:02.239809   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:04:02.239873   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:04:02.278885   60933 cri.go:89] found id: ""
	I1216 21:04:02.278914   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.278926   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:04:02.278935   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:04:02.279004   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:04:02.320701   60933 cri.go:89] found id: ""
	I1216 21:04:02.320731   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.320742   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:04:02.320749   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:04:02.320811   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:04:02.357726   60933 cri.go:89] found id: ""
	I1216 21:04:02.357757   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.357767   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:04:02.357773   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:04:02.357826   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:04:02.399577   60933 cri.go:89] found id: ""
	I1216 21:04:02.399609   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.399618   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:04:02.399624   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:04:02.399687   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:04:02.445559   60933 cri.go:89] found id: ""
	I1216 21:04:02.445590   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.445600   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:04:02.445607   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:04:02.445670   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:04:02.482983   60933 cri.go:89] found id: ""
	I1216 21:04:02.483015   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.483027   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:04:02.483035   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:04:02.483116   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:04:02.523028   60933 cri.go:89] found id: ""
	I1216 21:04:02.523055   60933 logs.go:282] 0 containers: []
	W1216 21:04:02.523063   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:04:02.523073   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:04:02.523084   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:02.577447   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:04:02.577487   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:04:02.594539   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:04:02.594567   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:04:02.683805   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:04:02.683832   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:04:02.683848   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:04:02.763377   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:04:02.763416   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:04:01.937214   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:04.436771   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:01.319860   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:03.320323   60829 pod_ready.go:103] pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:04.454654   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:06.456145   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:05.311029   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:05.328358   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:04:05.328438   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:04:05.367378   60933 cri.go:89] found id: ""
	I1216 21:04:05.367402   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.367409   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:04:05.367419   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:04:05.367468   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:04:05.406268   60933 cri.go:89] found id: ""
	I1216 21:04:05.406291   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.406301   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:04:05.406306   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:04:05.406353   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:04:05.444737   60933 cri.go:89] found id: ""
	I1216 21:04:05.444767   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.444778   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:04:05.444787   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:04:05.444836   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:04:05.484044   60933 cri.go:89] found id: ""
	I1216 21:04:05.484132   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.484153   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:04:05.484161   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:04:05.484222   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:04:05.523395   60933 cri.go:89] found id: ""
	I1216 21:04:05.523420   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.523431   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:04:05.523439   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:04:05.523501   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:04:05.566925   60933 cri.go:89] found id: ""
	I1216 21:04:05.566954   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.566967   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:04:05.566974   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:04:05.567036   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:04:05.611275   60933 cri.go:89] found id: ""
	I1216 21:04:05.611303   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.611314   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:04:05.611321   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:04:05.611396   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:04:05.650340   60933 cri.go:89] found id: ""
	I1216 21:04:05.650371   60933 logs.go:282] 0 containers: []
	W1216 21:04:05.650379   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:04:05.650389   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:04:05.650400   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:04:05.702277   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:04:05.702321   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 21:04:05.718685   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:04:05.718713   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:04:05.794979   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:04:05.795005   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:04:05.795020   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:04:05.897348   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:04:05.897383   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:04:08.447268   60933 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:08.462553   60933 kubeadm.go:597] duration metric: took 4m2.545161532s to restartPrimaryControlPlane
	W1216 21:04:08.462621   60933 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:08.462650   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:06.437699   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:08.936904   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:05.813413   60829 pod_ready.go:82] duration metric: took 4m0.000648161s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:05.813448   60829 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-hlt7s" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:05.813472   60829 pod_ready.go:39] duration metric: took 4m14.577422135s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:05.813498   60829 kubeadm.go:597] duration metric: took 4m22.010606819s to restartPrimaryControlPlane
	W1216 21:04:05.813559   60829 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:05.813593   60829 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:10.315541   60933 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.85286561s)
	I1216 21:04:10.315622   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:10.330937   60933 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:10.343702   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:10.356498   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:10.356526   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:10.356579   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:04:10.367777   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:10.367847   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:10.379109   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:04:10.389258   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:10.389313   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:10.399959   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:04:10.410664   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:10.410734   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:10.423138   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:04:10.433922   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:10.433976   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:04:10.445297   60933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:10.524236   60933 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 21:04:10.524344   60933 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:10.680331   60933 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:10.680489   60933 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:10.680641   60933 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 21:04:10.877305   60933 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:10.879375   60933 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:10.879496   60933 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:10.879567   60933 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:10.879647   60933 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:10.879748   60933 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:10.879865   60933 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:10.880127   60933 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:10.881047   60933 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:10.881874   60933 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:10.882778   60933 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:10.883678   60933 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:10.884029   60933 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:10.884130   60933 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:11.034011   60933 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:11.273509   60933 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:11.477553   60933 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:11.542158   60933 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:11.565791   60933 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:11.567317   60933 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:11.567409   60933 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:11.763223   60933 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:08.955135   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:10.957061   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:11.766107   60933 out.go:235]   - Booting up control plane ...
	I1216 21:04:11.766257   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:11.766367   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:11.768484   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:11.773601   60933 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:11.780554   60933 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 21:04:11.436931   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:13.437532   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:13.455175   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:15.455370   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.456801   60421 pod_ready.go:103] pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:15.936107   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.937233   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:17.949449   60421 pod_ready.go:82] duration metric: took 4m0.000885381s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:17.949484   60421 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-5xf67" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:17.949501   60421 pod_ready.go:39] duration metric: took 4m10.554596731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:17.949525   60421 kubeadm.go:597] duration metric: took 4m42.414672113s to restartPrimaryControlPlane
	W1216 21:04:17.949588   60421 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:17.949619   60421 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
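	The metrics-server pod above never reaches "Ready" within the 4m0s window, which is what triggers the "kubeadm reset" that follows. A minimal sketch of how such a pod could be inspected by hand (pod name copied from the log above; an active kubeconfig/context pointing at this profile is assumed, and the commands themselves are standard kubectl usage rather than anything captured in this run):
	
		# hypothetical inspection of the stuck metrics-server pod
		kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
		kubectl -n kube-system describe pod metrics-server-f79f97bbb-5xf67
	
	A pod held in ImagePullBackOff or with failing readiness probes would explain a status that stays "Ready":"False" for the whole wait; the addon image configured later in this log (fake.domain/registry.k8s.io/echoserver:1.4) is normally not pullable, which is consistent with that behaviour, though this log does not show the pod events directly.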
	I1216 21:04:19.938104   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:22.436710   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:24.936550   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:26.936809   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:29.437478   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:33.833179   60829 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (28.019561403s)
	I1216 21:04:33.833265   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:33.850170   60829 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:33.862112   60829 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:33.873752   60829 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:33.873777   60829 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:33.873832   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1216 21:04:33.885038   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:33.885115   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:33.897352   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1216 21:04:33.911055   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:33.911115   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:33.925077   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1216 21:04:33.938925   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:33.938997   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:33.952022   60829 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1216 21:04:33.963099   60829 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:33.963176   60829 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:04:33.974080   60829 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:34.031525   60829 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:04:34.031643   60829 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:34.153173   60829 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:34.153340   60829 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:34.153453   60829 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:04:34.166258   60829 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:31.936620   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:33.938157   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:34.168275   60829 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:34.168388   60829 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:34.168463   60829 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:34.168545   60829 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:34.168633   60829 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:34.168740   60829 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:34.168837   60829 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:34.168934   60829 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:34.169020   60829 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:34.169119   60829 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:34.169222   60829 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:34.169278   60829 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:34.169365   60829 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:34.277660   60829 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:34.526364   60829 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:04:34.629728   60829 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:34.757824   60829 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:34.838922   60829 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:34.839431   60829 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:34.841925   60829 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:34.843761   60829 out.go:235]   - Booting up control plane ...
	I1216 21:04:34.843874   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:34.843945   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:34.846919   60829 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:34.866038   60829 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:34.875031   60829 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:34.875112   60829 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:35.016713   60829 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:04:35.016879   60829 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:04:36.437043   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:38.437584   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:36.017947   60829 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001159452s
	I1216 21:04:36.018086   60829 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:04:40.519460   60829 kubeadm.go:310] [api-check] The API server is healthy after 4.501460025s
	I1216 21:04:40.533680   60829 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:04:40.552611   60829 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:04:40.585691   60829 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:04:40.585905   60829 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-327790 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:04:40.613752   60829 kubeadm.go:310] [bootstrap-token] Using token: w829op.p4bszg1q76emsxit
	I1216 21:04:40.615428   60829 out.go:235]   - Configuring RBAC rules ...
	I1216 21:04:40.615556   60829 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:04:40.629296   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:04:40.638449   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:04:40.644143   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:04:40.648665   60829 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:04:40.653151   60829 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:04:40.926399   60829 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:04:41.370569   60829 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:04:41.927555   60829 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:04:41.928692   60829 kubeadm.go:310] 
	I1216 21:04:41.928769   60829 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:04:41.928779   60829 kubeadm.go:310] 
	I1216 21:04:41.928851   60829 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:04:41.928878   60829 kubeadm.go:310] 
	I1216 21:04:41.928928   60829 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:04:41.929005   60829 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:04:41.929053   60829 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:04:41.929060   60829 kubeadm.go:310] 
	I1216 21:04:41.929107   60829 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:04:41.929114   60829 kubeadm.go:310] 
	I1216 21:04:41.929151   60829 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:04:41.929157   60829 kubeadm.go:310] 
	I1216 21:04:41.929205   60829 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:04:41.929264   60829 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:04:41.929325   60829 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:04:41.929354   60829 kubeadm.go:310] 
	I1216 21:04:41.929527   60829 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:04:41.929657   60829 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:04:41.929674   60829 kubeadm.go:310] 
	I1216 21:04:41.929787   60829 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token w829op.p4bszg1q76emsxit \
	I1216 21:04:41.929941   60829 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:04:41.929975   60829 kubeadm.go:310] 	--control-plane 
	I1216 21:04:41.929984   60829 kubeadm.go:310] 
	I1216 21:04:41.930103   60829 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:04:41.930124   60829 kubeadm.go:310] 
	I1216 21:04:41.930245   60829 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token w829op.p4bszg1q76emsxit \
	I1216 21:04:41.930378   60829 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:04:41.931554   60829 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:04:41.931685   60829 cni.go:84] Creating CNI manager for ""
	I1216 21:04:41.931699   60829 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:04:41.933748   60829 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:04:40.937882   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:43.436864   60215 pod_ready.go:103] pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:41.935317   60829 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:04:41.947502   60829 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
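	The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist carries the bridge CNI configuration announced just above. Its exact contents are not captured in this log; a minimal bridge conflist of the same general shape (plugin names, options and the subnet below are illustrative assumptions, not the file minikube actually wrote):
	
		sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
		{
		  "cniVersion": "0.4.0",
		  "name": "bridge",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "bridge",
		      "isDefaultGateway": true,
		      "ipMasq": true,
		      "hairpinMode": true,
		      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
		    },
		    { "type": "portmap", "capabilities": { "portMappings": true } }
		  ]
		}
		EOF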
	I1216 21:04:41.976180   60829 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:04:41.976288   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:41.976323   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-327790 minikube.k8s.io/updated_at=2024_12_16T21_04_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=default-k8s-diff-port-327790 minikube.k8s.io/primary=true
	I1216 21:04:42.010154   60829 ops.go:34] apiserver oom_adj: -16
	I1216 21:04:42.181919   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:42.682201   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:43.182557   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:43.682418   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:44.182318   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:44.682793   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.182342   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.682678   60829 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:45.777484   60829 kubeadm.go:1113] duration metric: took 3.801254961s to wait for elevateKubeSystemPrivileges
	I1216 21:04:45.777522   60829 kubeadm.go:394] duration metric: took 5m2.030533321s to StartCluster
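	The repeated "kubectl get sa default" runs above are a poll: privileges for kube-system can only be elevated once the default ServiceAccount exists. A rough shell equivalent of that wait loop (a simplification for readability, not minikube's actual Go implementation):
	
		# retry until kube-apiserver has created the "default" ServiceAccount
		until sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default \
		      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		  sleep 0.5
		done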
	I1216 21:04:45.777543   60829 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:45.777644   60829 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:04:45.780034   60829 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:45.780369   60829 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8444 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:04:45.780450   60829 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:04:45.780566   60829 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780579   60829 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780595   60829 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-327790"
	W1216 21:04:45.780606   60829 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:04:45.780599   60829 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-327790"
	I1216 21:04:45.780609   60829 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-327790"
	I1216 21:04:45.780627   60829 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-327790"
	I1216 21:04:45.780627   60829 config.go:182] Loaded profile config "default-k8s-diff-port-327790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	W1216 21:04:45.780638   60829 addons.go:243] addon metrics-server should already be in state true
	I1216 21:04:45.780648   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.780675   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.781091   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781091   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781132   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.781136   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.781174   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.781137   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.782022   60829 out.go:177] * Verifying Kubernetes components...
	I1216 21:04:45.783549   60829 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:04:45.799326   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42295
	I1216 21:04:45.799443   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36833
	I1216 21:04:45.799865   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.800491   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.800510   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.800588   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.801082   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.801102   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.801178   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I1216 21:04:45.801202   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.801517   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.801539   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.801707   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.801925   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.801959   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.801974   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.801992   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.802319   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.802817   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.802857   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.805750   60829 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-327790"
	W1216 21:04:45.805775   60829 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:04:45.805806   60829 host.go:66] Checking if "default-k8s-diff-port-327790" exists ...
	I1216 21:04:45.806153   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.806193   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.820545   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46261
	I1216 21:04:45.821062   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.821598   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.821625   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.822086   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.822294   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.823995   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.824775   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I1216 21:04:45.825269   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.825754   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.825778   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.826117   60829 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:04:45.826158   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.826843   60829 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:45.826892   60829 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:45.827527   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:04:45.827557   60829 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:04:45.827577   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.829352   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34899
	I1216 21:04:45.829769   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.830197   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.830217   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.830543   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.830767   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.831413   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.832010   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.832030   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.832202   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.832424   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.832496   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.832847   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.833056   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.834475   60829 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:04:45.835944   60829 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:45.835965   60829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:04:45.835983   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.839118   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.839533   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.839560   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.839744   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.839947   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.840087   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.840218   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.845365   60829 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37995
	I1216 21:04:45.845925   60829 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:45.847042   60829 main.go:141] libmachine: Using API Version  1
	I1216 21:04:45.847060   60829 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:45.847450   60829 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:45.847669   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetState
	I1216 21:04:45.849934   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .DriverName
	I1216 21:04:45.850165   60829 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:45.850182   60829 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:04:45.850199   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHHostname
	I1216 21:04:45.853083   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.853493   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:68:47:d5", ip: ""} in network mk-default-k8s-diff-port-327790: {Iface:virbr1 ExpiryTime:2024-12-16 21:59:29 +0000 UTC Type:0 Mac:52:54:00:68:47:d5 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:default-k8s-diff-port-327790 Clientid:01:52:54:00:68:47:d5}
	I1216 21:04:45.853518   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | domain default-k8s-diff-port-327790 has defined IP address 192.168.39.162 and MAC address 52:54:00:68:47:d5 in network mk-default-k8s-diff-port-327790
	I1216 21:04:45.853679   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHPort
	I1216 21:04:45.853848   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHKeyPath
	I1216 21:04:45.854024   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .GetSSHUsername
	I1216 21:04:45.854177   60829 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/default-k8s-diff-port-327790/id_rsa Username:docker}
	I1216 21:04:45.978935   60829 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:04:46.010601   60829 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-327790" to be "Ready" ...
	I1216 21:04:46.019674   60829 node_ready.go:49] node "default-k8s-diff-port-327790" has status "Ready":"True"
	I1216 21:04:46.019704   60829 node_ready.go:38] duration metric: took 9.066576ms for node "default-k8s-diff-port-327790" to be "Ready" ...
	I1216 21:04:46.019715   60829 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:46.033957   60829 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:46.103779   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:04:46.103812   60829 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:04:46.120299   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:46.171131   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:04:46.171171   60829 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:04:46.171280   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:46.244556   60829 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:46.244587   60829 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:04:46.332646   60829 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:47.461793   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.34145582s)
	I1216 21:04:47.461871   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.129193295s)
	I1216 21:04:47.461793   60829 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.290486436s)
	I1216 21:04:47.461899   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461913   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461918   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.461875   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.461982   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.461927   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462463   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462469   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462480   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462488   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462494   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462504   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462506   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462511   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462516   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462521   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462529   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462556   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462573   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462581   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.462588   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.462805   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462816   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.462816   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.462827   60829 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-327790"
	I1216 21:04:47.462841   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.462848   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.463049   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.463067   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.524466   60829 main.go:141] libmachine: Making call to close driver server
	I1216 21:04:47.524497   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) Calling .Close
	I1216 21:04:47.524822   60829 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:04:47.524843   60829 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:04:47.524869   60829 main.go:141] libmachine: (default-k8s-diff-port-327790) DBG | Closing plugin on server side
	I1216 21:04:47.526679   60829 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I1216 21:04:45.861404   60421 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.911759863s)
	I1216 21:04:45.861483   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:45.889560   60421 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:04:45.922090   60421 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:04:45.945227   60421 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:04:45.945261   60421 kubeadm.go:157] found existing configuration files:
	
	I1216 21:04:45.945306   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:04:45.960594   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:04:45.960666   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:04:45.980613   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:04:46.005349   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:04:46.005431   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:04:46.021683   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:04:46.032967   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:04:46.033042   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:04:46.064718   60421 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:04:46.078736   60421 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:04:46.078805   60421 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 21:04:46.092798   60421 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:04:46.293434   60421 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:04:45.430910   60215 pod_ready.go:82] duration metric: took 4m0.000948437s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" ...
	E1216 21:04:45.430950   60215 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-f79f97bbb-d6gmd" in "kube-system" namespace to be "Ready" (will not retry!)
	I1216 21:04:45.430970   60215 pod_ready.go:39] duration metric: took 4m12.926677248s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:45.431002   60215 kubeadm.go:597] duration metric: took 4m20.847109652s to restartPrimaryControlPlane
	W1216 21:04:45.431059   60215 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 21:04:45.431092   60215 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:04:47.527909   60829 addons.go:510] duration metric: took 1.747463467s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I1216 21:04:48.047956   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:51.781856   60933 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 21:04:51.782285   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:04:51.782543   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
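	For the run logged by PID 60933, the kubelet never becomes healthy within the initial 40s. The endpoint kubeadm polls can also be queried by hand on the node, along with the usual service checks (standard commands, not taken from this log):
	
		curl -sSL http://127.0.0.1:10248/healthz            # the endpoint kubeadm polls above
		sudo systemctl status kubelet --no-pager            # is the unit running at all?
		sudo journalctl -u kubelet --no-pager | tail -n 50  # recent kubelet errors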
	I1216 21:04:54.704462   60421 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:04:54.704514   60421 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:04:54.704600   60421 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:04:54.704736   60421 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:04:54.704839   60421 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:04:54.704894   60421 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:04:54.706650   60421 out.go:235]   - Generating certificates and keys ...
	I1216 21:04:54.706771   60421 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:04:54.706865   60421 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:04:54.706999   60421 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:04:54.707113   60421 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:04:54.707256   60421 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:04:54.707344   60421 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:04:54.707478   60421 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:04:54.707573   60421 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:04:54.707683   60421 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:04:54.707806   60421 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:04:54.707851   60421 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:04:54.707902   60421 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:04:54.707968   60421 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:04:54.708056   60421 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:04:54.708127   60421 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:04:54.708225   60421 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:04:54.708305   60421 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:04:54.708427   60421 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:04:54.708526   60421 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:04:54.710014   60421 out.go:235]   - Booting up control plane ...
	I1216 21:04:54.710113   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:04:54.710197   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:04:54.710254   60421 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:04:54.710361   60421 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:04:54.710457   60421 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:04:54.710511   60421 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:04:54.710670   60421 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:04:54.710792   60421 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:04:54.710852   60421 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.532878ms
	I1216 21:04:54.710912   60421 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:04:54.710982   60421 kubeadm.go:310] [api-check] The API server is healthy after 5.50189872s
	I1216 21:04:54.711125   60421 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:04:54.711281   60421 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:04:54.711362   60421 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:04:54.711618   60421 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-232338 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:04:54.711712   60421 kubeadm.go:310] [bootstrap-token] Using token: knn1cl.i9horbjuutctjfyf
	I1216 21:04:54.714363   60421 out.go:235]   - Configuring RBAC rules ...
	I1216 21:04:54.714488   60421 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:04:54.714560   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:04:54.714674   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:04:54.714820   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:04:54.714914   60421 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:04:54.714981   60421 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:04:54.715083   60421 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:04:54.715159   60421 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:04:54.715228   60421 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:04:54.715237   60421 kubeadm.go:310] 
	I1216 21:04:54.715345   60421 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:04:54.715359   60421 kubeadm.go:310] 
	I1216 21:04:54.715455   60421 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:04:54.715463   60421 kubeadm.go:310] 
	I1216 21:04:54.715510   60421 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:04:54.715596   60421 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:04:54.715669   60421 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:04:54.715679   60421 kubeadm.go:310] 
	I1216 21:04:54.715767   60421 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:04:54.715775   60421 kubeadm.go:310] 
	I1216 21:04:54.715842   60421 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:04:54.715851   60421 kubeadm.go:310] 
	I1216 21:04:54.715908   60421 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:04:54.715969   60421 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:04:54.716026   60421 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:04:54.716032   60421 kubeadm.go:310] 
	I1216 21:04:54.716106   60421 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:04:54.716171   60421 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:04:54.716177   60421 kubeadm.go:310] 
	I1216 21:04:54.716240   60421 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token knn1cl.i9horbjuutctjfyf \
	I1216 21:04:54.716340   60421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:04:54.716374   60421 kubeadm.go:310] 	--control-plane 
	I1216 21:04:54.716384   60421 kubeadm.go:310] 
	I1216 21:04:54.716457   60421 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:04:54.716467   60421 kubeadm.go:310] 
	I1216 21:04:54.716534   60421 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token knn1cl.i9horbjuutctjfyf \
	I1216 21:04:54.716634   60421 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:04:54.716644   60421 cni.go:84] Creating CNI manager for ""
	I1216 21:04:54.716651   60421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:04:54.718260   60421 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:04:50.542207   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:52.542453   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:55.040960   60829 pod_ready.go:103] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"False"
	I1216 21:04:56.042145   60829 pod_ready.go:93] pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.042175   60829 pod_ready.go:82] duration metric: took 10.008191514s for pod "etcd-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.042192   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.047996   60829 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.048022   60829 pod_ready.go:82] duration metric: took 5.821217ms for pod "kube-apiserver-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.048031   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.052582   60829 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.052608   60829 pod_ready.go:82] duration metric: took 4.569092ms for pod "kube-controller-manager-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.052619   60829 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.056805   60829 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace has status "Ready":"True"
	I1216 21:04:56.056834   60829 pod_ready.go:82] duration metric: took 4.206726ms for pod "kube-scheduler-default-k8s-diff-port-327790" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:56.056841   60829 pod_ready.go:39] duration metric: took 10.037112061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
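	The pod_ready.go lines above are a poll on each system pod's PodReady condition before the cluster is declared usable. The following is only a hedged client-go sketch of that kind of wait, not minikube's actual implementation; the pod name, namespace, and kubeconfig path are simply the ones visible in this log.

```go
// Hedged sketch: wait up to 6m for a named pod's Ready condition to be True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from this test run's log output.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20091-7083/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"etcd-default-k8s-diff-port-327790", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("pod ready:", err == nil)
}
```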
	I1216 21:04:56.056855   60829 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:04:56.056904   60829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:04:56.076993   60829 api_server.go:72] duration metric: took 10.296578804s to wait for apiserver process to appear ...
	I1216 21:04:56.077023   60829 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:04:56.077045   60829 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8444/healthz ...
	I1216 21:04:56.082250   60829 api_server.go:279] https://192.168.39.162:8444/healthz returned 200:
	ok
	I1216 21:04:56.083348   60829 api_server.go:141] control plane version: v1.32.0
	I1216 21:04:56.083369   60829 api_server.go:131] duration metric: took 6.339438ms to wait for apiserver health ...
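	The healthz wait above amounts to an HTTPS GET against /healthz that succeeds once the apiserver answers 200 with body "ok". A minimal sketch of such a probe follows; TLS verification is skipped here purely for brevity of a manual check, whereas minikube itself authenticates with the cluster certificates.

```go
// Minimal sketch of an apiserver healthz probe; expects "200 ok" when healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Endpoint taken from this log (default-k8s-diff-port uses 8444).
	resp, err := client.Get("https://192.168.39.162:8444/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
```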
	I1216 21:04:56.083377   60829 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:04:56.090255   60829 system_pods.go:59] 9 kube-system pods found
	I1216 21:04:56.090290   60829 system_pods.go:61] "coredns-668d6bf9bc-2qcfx" [4ac98efa-96ff-4564-93de-4a61de7a6507] Running
	I1216 21:04:56.090302   60829 system_pods.go:61] "coredns-668d6bf9bc-fb7wx" [f2f2c0e7-893f-45ba-8da9-3b03f5560d89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:04:56.090310   60829 system_pods.go:61] "etcd-default-k8s-diff-port-327790" [5363e160-ef78-4737-89f9-5f4d0f0eab95] Running
	I1216 21:04:56.090318   60829 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-327790" [b53c6be6-e476-4a5a-80c2-96e701736820] Running
	I1216 21:04:56.090324   60829 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-327790" [57d8747a-7258-48c3-9bcd-6fedaa8b7431] Running
	I1216 21:04:56.090329   60829 system_pods.go:61] "kube-proxy-njqp8" [e5f1789d-b343-4c2e-b078-4a15f4b18569] Running
	I1216 21:04:56.090334   60829 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-327790" [e2303bbd-b9d9-4392-867f-6f5f43f74826] Running
	I1216 21:04:56.090342   60829 system_pods.go:61] "metrics-server-f79f97bbb-84xtf" [569c6717-dc12-474f-8156-d2dd9e410a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:04:56.090349   60829 system_pods.go:61] "storage-provisioner" [4e5b12f0-3d96-4dd0-81e7-300b82058d47] Running
	I1216 21:04:56.090360   60829 system_pods.go:74] duration metric: took 6.975795ms to wait for pod list to return data ...
	I1216 21:04:56.090373   60829 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:04:56.093967   60829 default_sa.go:45] found service account: "default"
	I1216 21:04:56.093998   60829 default_sa.go:55] duration metric: took 3.616693ms for default service account to be created ...
	I1216 21:04:56.094010   60829 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:04:56.241532   60829 system_pods.go:86] 9 kube-system pods found
	I1216 21:04:56.241568   60829 system_pods.go:89] "coredns-668d6bf9bc-2qcfx" [4ac98efa-96ff-4564-93de-4a61de7a6507] Running
	I1216 21:04:56.241582   60829 system_pods.go:89] "coredns-668d6bf9bc-fb7wx" [f2f2c0e7-893f-45ba-8da9-3b03f5560d89] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:04:56.241589   60829 system_pods.go:89] "etcd-default-k8s-diff-port-327790" [5363e160-ef78-4737-89f9-5f4d0f0eab95] Running
	I1216 21:04:56.241597   60829 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-327790" [b53c6be6-e476-4a5a-80c2-96e701736820] Running
	I1216 21:04:56.241605   60829 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-327790" [57d8747a-7258-48c3-9bcd-6fedaa8b7431] Running
	I1216 21:04:56.241611   60829 system_pods.go:89] "kube-proxy-njqp8" [e5f1789d-b343-4c2e-b078-4a15f4b18569] Running
	I1216 21:04:56.241617   60829 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-327790" [e2303bbd-b9d9-4392-867f-6f5f43f74826] Running
	I1216 21:04:56.241624   60829 system_pods.go:89] "metrics-server-f79f97bbb-84xtf" [569c6717-dc12-474f-8156-d2dd9e410a54] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:04:56.241629   60829 system_pods.go:89] "storage-provisioner" [4e5b12f0-3d96-4dd0-81e7-300b82058d47] Running
	I1216 21:04:56.241639   60829 system_pods.go:126] duration metric: took 147.621114ms to wait for k8s-apps to be running ...
	I1216 21:04:56.241656   60829 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:04:56.241730   60829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:04:56.258891   60829 system_svc.go:56] duration metric: took 17.227056ms WaitForService to wait for kubelet
	I1216 21:04:56.258935   60829 kubeadm.go:582] duration metric: took 10.478521255s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:04:56.258962   60829 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:04:56.438641   60829 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:04:56.438667   60829 node_conditions.go:123] node cpu capacity is 2
	I1216 21:04:56.438679   60829 node_conditions.go:105] duration metric: took 179.711624ms to run NodePressure ...
	I1216 21:04:56.438692   60829 start.go:241] waiting for startup goroutines ...
	I1216 21:04:56.438700   60829 start.go:246] waiting for cluster config update ...
	I1216 21:04:56.438714   60829 start.go:255] writing updated cluster config ...
	I1216 21:04:56.438975   60829 ssh_runner.go:195] Run: rm -f paused
	I1216 21:04:56.490195   60829 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:04:56.492395   60829 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-327790" cluster and "default" namespace by default
	I1216 21:04:54.719483   60421 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:04:54.732035   60421 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
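	The 496-byte conflist itself is not shown in the log. The sketch below only illustrates the general shape of a bridge-plugin CNI config of this kind (bridge plugin, host-local IPAM, portmap); the bridge name, subnet, and other field values are placeholders, not the file minikube actually writes.

```go
// Hedged sketch: write a minimal bridge CNI conflist to the path used above.
package main

import (
	"encoding/json"
	"os"
)

func main() {
	conf := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge", // placeholder bridge name
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // placeholder pod CIDR
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	data, _ := json.MarshalIndent(conf, "", "  ")
	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644)
}
```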
	I1216 21:04:54.754010   60421 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:04:54.754122   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:54.754177   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-232338 minikube.k8s.io/updated_at=2024_12_16T21_04_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=no-preload-232338 minikube.k8s.io/primary=true
	I1216 21:04:54.773008   60421 ops.go:34] apiserver oom_adj: -16
	I1216 21:04:55.009573   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:55.510039   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:56.009645   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:56.509608   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:57.009714   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:57.509902   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.009901   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.509631   60421 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:04:58.632896   60421 kubeadm.go:1113] duration metric: took 3.878846316s to wait for elevateKubeSystemPrivileges
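	The repeated `kubectl get sa default` runs above are a simple ~500 ms poll: the elevateKubeSystemPrivileges step waits until the default service account exists before applying the minikube-rbac clusterrolebinding. A hedged Go sketch of that loop (illustrative only, not the actual minikube code):

```go
// Hedged sketch: poll `kubectl get sa default` until it succeeds or times out.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.32.0/kubectl"
	for i := 0; i < 120; i++ { // roughly a one-minute budget at 500ms per try
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}
```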
	I1216 21:04:58.632933   60421 kubeadm.go:394] duration metric: took 5m23.15322559s to StartCluster
	I1216 21:04:58.632951   60421 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:58.633031   60421 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:04:58.635409   60421 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:04:58.635720   60421 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.240 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:04:58.635835   60421 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:04:58.635944   60421 addons.go:69] Setting storage-provisioner=true in profile "no-preload-232338"
	I1216 21:04:58.635958   60421 config.go:182] Loaded profile config "no-preload-232338": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:04:58.635966   60421 addons.go:234] Setting addon storage-provisioner=true in "no-preload-232338"
	I1216 21:04:58.635969   60421 addons.go:69] Setting default-storageclass=true in profile "no-preload-232338"
	W1216 21:04:58.635975   60421 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:04:58.635986   60421 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-232338"
	I1216 21:04:58.636005   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.635997   60421 addons.go:69] Setting metrics-server=true in profile "no-preload-232338"
	I1216 21:04:58.636029   60421 addons.go:234] Setting addon metrics-server=true in "no-preload-232338"
	W1216 21:04:58.636038   60421 addons.go:243] addon metrics-server should already be in state true
	I1216 21:04:58.636069   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.636428   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636460   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.636428   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636513   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.636532   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.636549   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.637558   60421 out.go:177] * Verifying Kubernetes components...
	I1216 21:04:58.639254   60421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:04:58.652770   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
	I1216 21:04:58.652789   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35093
	I1216 21:04:58.653247   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.653368   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.653818   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.653836   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.653944   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.653963   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.654562   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.654565   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.654775   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.655078   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.655117   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.656383   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I1216 21:04:58.656987   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.657520   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.657553   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.657933   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.658517   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.658566   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.658692   60421 addons.go:234] Setting addon default-storageclass=true in "no-preload-232338"
	W1216 21:04:58.658708   60421 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:04:58.658737   60421 host.go:66] Checking if "no-preload-232338" exists ...
	I1216 21:04:58.659001   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.659043   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.672942   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34153
	I1216 21:04:58.673478   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.674034   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.674063   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.674421   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.674594   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37541
	I1216 21:04:58.674614   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.674994   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.675686   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.675699   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.676334   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.676480   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.676898   60421 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:04:58.676931   60421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:04:58.679230   60421 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:04:58.680032   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33309
	I1216 21:04:58.680609   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.680754   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:04:58.680772   60421 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:04:58.680794   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.681202   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.681221   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.681610   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.681815   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.683608   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.684266   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.684765   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.684793   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.684925   60421 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:04:56.783069   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:04:56.783323   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:04:58.684932   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.685156   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.685321   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.685515   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.686360   60421 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:58.686379   60421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:04:58.686396   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.689909   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.690365   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.690392   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.690698   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.690927   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.691095   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.691305   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.695899   60421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36017
	I1216 21:04:58.696274   60421 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:04:58.696758   60421 main.go:141] libmachine: Using API Version  1
	I1216 21:04:58.696777   60421 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:04:58.697064   60421 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:04:58.697225   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetState
	I1216 21:04:58.698530   60421 main.go:141] libmachine: (no-preload-232338) Calling .DriverName
	I1216 21:04:58.698751   60421 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:58.698766   60421 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:04:58.698784   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHHostname
	I1216 21:04:58.701986   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.702420   60421 main.go:141] libmachine: (no-preload-232338) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:00:29", ip: ""} in network mk-no-preload-232338: {Iface:virbr2 ExpiryTime:2024-12-16 21:50:30 +0000 UTC Type:0 Mac:52:54:00:07:00:29 Iaid: IPaddr:192.168.50.240 Prefix:24 Hostname:no-preload-232338 Clientid:01:52:54:00:07:00:29}
	I1216 21:04:58.702473   60421 main.go:141] libmachine: (no-preload-232338) DBG | domain no-preload-232338 has defined IP address 192.168.50.240 and MAC address 52:54:00:07:00:29 in network mk-no-preload-232338
	I1216 21:04:58.702655   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHPort
	I1216 21:04:58.702839   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHKeyPath
	I1216 21:04:58.702979   60421 main.go:141] libmachine: (no-preload-232338) Calling .GetSSHUsername
	I1216 21:04:58.703197   60421 sshutil.go:53] new ssh client: &{IP:192.168.50.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/no-preload-232338/id_rsa Username:docker}
	I1216 21:04:58.866115   60421 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:04:58.892287   60421 node_ready.go:35] waiting up to 6m0s for node "no-preload-232338" to be "Ready" ...
	I1216 21:04:58.949580   60421 node_ready.go:49] node "no-preload-232338" has status "Ready":"True"
	I1216 21:04:58.949610   60421 node_ready.go:38] duration metric: took 57.274849ms for node "no-preload-232338" to be "Ready" ...
	I1216 21:04:58.949622   60421 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:04:58.983955   60421 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace to be "Ready" ...
	I1216 21:04:59.036124   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:04:59.039113   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:04:59.039139   60421 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:04:59.087493   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:04:59.087531   60421 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:04:59.094976   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:04:59.129816   60421 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:04:59.129840   60421 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:04:59.236390   60421 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:00.157688   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.121522553s)
	I1216 21:05:00.157736   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.157751   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.157764   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.06274536s)
	I1216 21:05:00.157830   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.157851   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158259   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158270   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158282   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158288   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158297   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158314   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.158327   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158319   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158344   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.158352   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.158589   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158604   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.158624   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.158589   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.158655   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.182819   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.182844   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.183229   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.183271   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.679810   60421 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.44337328s)
	I1216 21:05:00.679867   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.679880   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.680233   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.680254   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.680266   60421 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:00.680274   60421 main.go:141] libmachine: (no-preload-232338) Calling .Close
	I1216 21:05:00.680612   60421 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:00.680632   60421 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:00.680643   60421 addons.go:475] Verifying addon metrics-server=true in "no-preload-232338"
	I1216 21:05:00.680659   60421 main.go:141] libmachine: (no-preload-232338) DBG | Closing plugin on server side
	I1216 21:05:00.682400   60421 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1216 21:05:00.684226   60421 addons.go:510] duration metric: took 2.048395371s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1216 21:05:00.997599   60421 pod_ready.go:103] pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:01.990706   60421 pod_ready.go:93] pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:01.990733   60421 pod_ready.go:82] duration metric: took 3.006750411s for pod "coredns-668d6bf9bc-4wwvd" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:01.990742   60421 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:03.998055   60421 pod_ready.go:103] pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:05.997310   60421 pod_ready.go:93] pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:05.997334   60421 pod_ready.go:82] duration metric: took 4.006586503s for pod "coredns-668d6bf9bc-c4qfj" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:05.997346   60421 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.002576   60421 pod_ready.go:93] pod "etcd-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.002597   60421 pod_ready.go:82] duration metric: took 5.244238ms for pod "etcd-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.002607   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.007407   60421 pod_ready.go:93] pod "kube-apiserver-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.007435   60421 pod_ready.go:82] duration metric: took 4.820838ms for pod "kube-apiserver-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.007449   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.012239   60421 pod_ready.go:93] pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.012263   60421 pod_ready.go:82] duration metric: took 4.806874ms for pod "kube-controller-manager-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.012273   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m5hq8" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.017087   60421 pod_ready.go:93] pod "kube-proxy-m5hq8" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.017111   60421 pod_ready.go:82] duration metric: took 4.830348ms for pod "kube-proxy-m5hq8" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.017124   60421 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.393947   60421 pod_ready.go:93] pod "kube-scheduler-no-preload-232338" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:06.393978   60421 pod_ready.go:82] duration metric: took 376.845934ms for pod "kube-scheduler-no-preload-232338" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:06.393989   60421 pod_ready.go:39] duration metric: took 7.444356073s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:05:06.394008   60421 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:05:06.394074   60421 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:05:06.410287   60421 api_server.go:72] duration metric: took 7.774519412s to wait for apiserver process to appear ...
	I1216 21:05:06.410327   60421 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:05:06.410363   60421 api_server.go:253] Checking apiserver healthz at https://192.168.50.240:8443/healthz ...
	I1216 21:05:06.415344   60421 api_server.go:279] https://192.168.50.240:8443/healthz returned 200:
	ok
	I1216 21:05:06.416302   60421 api_server.go:141] control plane version: v1.32.0
	I1216 21:05:06.416324   60421 api_server.go:131] duration metric: took 5.989768ms to wait for apiserver health ...
	I1216 21:05:06.416333   60421 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:05:06.598174   60421 system_pods.go:59] 9 kube-system pods found
	I1216 21:05:06.598205   60421 system_pods.go:61] "coredns-668d6bf9bc-4wwvd" [1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e] Running
	I1216 21:05:06.598210   60421 system_pods.go:61] "coredns-668d6bf9bc-c4qfj" [b9bf3125-1e6d-4794-a2e6-2ff7ed5132b1] Running
	I1216 21:05:06.598214   60421 system_pods.go:61] "etcd-no-preload-232338" [5318f756-4c64-46be-b71b-94d53f48f0e9] Running
	I1216 21:05:06.598218   60421 system_pods.go:61] "kube-apiserver-no-preload-232338" [8d8fa68c-80ab-4747-a2ce-eeaff8847c29] Running
	I1216 21:05:06.598222   60421 system_pods.go:61] "kube-controller-manager-no-preload-232338" [8626806c-cd3f-488c-95c3-4b909878c1e4] Running
	I1216 21:05:06.598224   60421 system_pods.go:61] "kube-proxy-m5hq8" [ca0d357a-dda2-4508-a954-5c67eaf5b8ac] Running
	I1216 21:05:06.598229   60421 system_pods.go:61] "kube-scheduler-no-preload-232338" [8944107e-9e5c-474b-a0c1-9461e797a131] Running
	I1216 21:05:06.598236   60421 system_pods.go:61] "metrics-server-f79f97bbb-l7dcr" [fabafb40-1cb8-427b-88a6-37eeb6fd5b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:06.598240   60421 system_pods.go:61] "storage-provisioner" [3b742666-dfd4-4c9b-95a9-25367ec2a718] Running
	I1216 21:05:06.598248   60421 system_pods.go:74] duration metric: took 181.908567ms to wait for pod list to return data ...
	I1216 21:05:06.598255   60421 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:05:06.794774   60421 default_sa.go:45] found service account: "default"
	I1216 21:05:06.794805   60421 default_sa.go:55] duration metric: took 196.542698ms for default service account to be created ...
	I1216 21:05:06.794823   60421 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:05:06.998297   60421 system_pods.go:86] 9 kube-system pods found
	I1216 21:05:06.998332   60421 system_pods.go:89] "coredns-668d6bf9bc-4wwvd" [1c63ab10-dfdd-4aca-b39f-bc9b0e028e5e] Running
	I1216 21:05:06.998341   60421 system_pods.go:89] "coredns-668d6bf9bc-c4qfj" [b9bf3125-1e6d-4794-a2e6-2ff7ed5132b1] Running
	I1216 21:05:06.998348   60421 system_pods.go:89] "etcd-no-preload-232338" [5318f756-4c64-46be-b71b-94d53f48f0e9] Running
	I1216 21:05:06.998354   60421 system_pods.go:89] "kube-apiserver-no-preload-232338" [8d8fa68c-80ab-4747-a2ce-eeaff8847c29] Running
	I1216 21:05:06.998359   60421 system_pods.go:89] "kube-controller-manager-no-preload-232338" [8626806c-cd3f-488c-95c3-4b909878c1e4] Running
	I1216 21:05:06.998364   60421 system_pods.go:89] "kube-proxy-m5hq8" [ca0d357a-dda2-4508-a954-5c67eaf5b8ac] Running
	I1216 21:05:06.998369   60421 system_pods.go:89] "kube-scheduler-no-preload-232338" [8944107e-9e5c-474b-a0c1-9461e797a131] Running
	I1216 21:05:06.998378   60421 system_pods.go:89] "metrics-server-f79f97bbb-l7dcr" [fabafb40-1cb8-427b-88a6-37eeb6fd5b77] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:06.998385   60421 system_pods.go:89] "storage-provisioner" [3b742666-dfd4-4c9b-95a9-25367ec2a718] Running
	I1216 21:05:06.998397   60421 system_pods.go:126] duration metric: took 203.564807ms to wait for k8s-apps to be running ...
	I1216 21:05:06.998407   60421 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:05:06.998457   60421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:07.014979   60421 system_svc.go:56] duration metric: took 16.561363ms WaitForService to wait for kubelet
	I1216 21:05:07.015013   60421 kubeadm.go:582] duration metric: took 8.379260538s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:05:07.015029   60421 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:05:07.195470   60421 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:05:07.195504   60421 node_conditions.go:123] node cpu capacity is 2
	I1216 21:05:07.195516   60421 node_conditions.go:105] duration metric: took 180.480949ms to run NodePressure ...
	I1216 21:05:07.195530   60421 start.go:241] waiting for startup goroutines ...
	I1216 21:05:07.195541   60421 start.go:246] waiting for cluster config update ...
	I1216 21:05:07.195554   60421 start.go:255] writing updated cluster config ...
	I1216 21:05:07.195857   60421 ssh_runner.go:195] Run: rm -f paused
	I1216 21:05:07.244442   60421 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:05:07.246788   60421 out.go:177] * Done! kubectl is now configured to use "no-preload-232338" cluster and "default" namespace by default
	I1216 21:05:06.784032   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:05:06.784224   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
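	The kubelet-check failures for this profile mean nothing is listening on 127.0.0.1:10248 yet; kubeadm keeps re-issuing the same healthz GET until the kubelet comes up or the 4m0s budget is exhausted. A tiny illustrative sketch of that probe:

```go
// Illustrative sketch of kubeadm's kubelet health probe on port 10248.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://127.0.0.1:10248/healthz")
	if err != nil {
		// "connection refused" here means the kubelet is not listening yet.
		fmt.Println("kubelet not healthy yet:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz:", resp.Status)
}
```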
	I1216 21:05:13.066274   60215 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (27.635155592s)
	I1216 21:05:13.066379   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:13.096145   60215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 21:05:13.109211   60215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:05:13.125828   60215 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:05:13.125859   60215 kubeadm.go:157] found existing configuration files:
	
	I1216 21:05:13.125914   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:05:13.146982   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:05:13.147053   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:05:13.159382   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:05:13.176492   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:05:13.176572   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:05:13.190933   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:05:13.213230   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:05:13.213301   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:05:13.224631   60215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:05:13.234914   60215 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:05:13.234975   60215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
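	The sequence above is the stale-config check: each component kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the string (or the file itself) is missing, so the kubeadm init that follows can regenerate it. A hedged Go equivalent of that cleanup (illustrative, not the actual kubeadm.go implementation):

```go
// Hedged sketch: drop component kubeconfigs that don't reference the expected
// control-plane endpoint so kubeadm init can rewrite them.
package main

import (
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(f) // removing a missing file is harmless here
		}
	}
}
```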
	I1216 21:05:13.245513   60215 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:05:13.300399   60215 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
	I1216 21:05:13.300491   60215 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:05:13.424114   60215 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:05:13.424252   60215 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:05:13.424372   60215 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 21:05:13.434507   60215 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:05:13.436710   60215 out.go:235]   - Generating certificates and keys ...
	I1216 21:05:13.436825   60215 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:05:13.436985   60215 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:05:13.437127   60215 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:05:13.437215   60215 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:05:13.437317   60215 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:05:13.437404   60215 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:05:13.437822   60215 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:05:13.438183   60215 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:05:13.438724   60215 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:05:13.439096   60215 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:05:13.439334   60215 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:05:13.439399   60215 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:05:13.528853   60215 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:05:13.700795   60215 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 21:05:13.890142   60215 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:05:14.166151   60215 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:05:14.310513   60215 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:05:14.311121   60215 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:05:14.317114   60215 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:05:14.319080   60215 out.go:235]   - Booting up control plane ...
	I1216 21:05:14.319218   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:05:14.319332   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:05:14.319518   60215 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:05:14.340394   60215 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:05:14.348443   60215 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:05:14.348533   60215 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:05:14.493244   60215 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 21:05:14.493456   60215 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 21:05:14.995210   60215 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.042805ms
	I1216 21:05:14.995325   60215 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 21:05:20.000911   60215 kubeadm.go:310] [api-check] The API server is healthy after 5.002773967s
	I1216 21:05:20.019851   60215 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 21:05:20.037375   60215 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 21:05:20.074003   60215 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 21:05:20.074237   60215 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-606219 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 21:05:20.087136   60215 kubeadm.go:310] [bootstrap-token] Using token: wev02f.lvhctqt9pq1agi1c
	I1216 21:05:20.088742   60215 out.go:235]   - Configuring RBAC rules ...
	I1216 21:05:20.088893   60215 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 21:05:20.094114   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 21:05:20.101979   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 21:05:20.105419   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 21:05:20.112443   60215 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 21:05:20.116045   60215 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 21:05:20.406790   60215 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 21:05:20.844101   60215 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 21:05:21.414298   60215 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 21:05:21.414397   60215 kubeadm.go:310] 
	I1216 21:05:21.414488   60215 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 21:05:21.414504   60215 kubeadm.go:310] 
	I1216 21:05:21.414636   60215 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 21:05:21.414655   60215 kubeadm.go:310] 
	I1216 21:05:21.414694   60215 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 21:05:21.414796   60215 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 21:05:21.414866   60215 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 21:05:21.414877   60215 kubeadm.go:310] 
	I1216 21:05:21.414978   60215 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 21:05:21.415004   60215 kubeadm.go:310] 
	I1216 21:05:21.415071   60215 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 21:05:21.415080   60215 kubeadm.go:310] 
	I1216 21:05:21.415147   60215 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 21:05:21.415314   60215 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 21:05:21.415424   60215 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 21:05:21.415444   60215 kubeadm.go:310] 
	I1216 21:05:21.415568   60215 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 21:05:21.415674   60215 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 21:05:21.415690   60215 kubeadm.go:310] 
	I1216 21:05:21.415837   60215 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wev02f.lvhctqt9pq1agi1c \
	I1216 21:05:21.415982   60215 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
	I1216 21:05:21.416023   60215 kubeadm.go:310] 	--control-plane 
	I1216 21:05:21.416033   60215 kubeadm.go:310] 
	I1216 21:05:21.416152   60215 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 21:05:21.416165   60215 kubeadm.go:310] 
	I1216 21:05:21.416295   60215 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wev02f.lvhctqt9pq1agi1c \
	I1216 21:05:21.416452   60215 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 
	I1216 21:05:21.417157   60215 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:05:21.417251   60215 cni.go:84] Creating CNI manager for ""
	I1216 21:05:21.417265   60215 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 21:05:21.418899   60215 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 21:05:21.420240   60215 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 21:05:21.438639   60215 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 21:05:21.470443   60215 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 21:05:21.470525   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:21.470552   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-606219 minikube.k8s.io/updated_at=2024_12_16T21_05_21_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=embed-certs-606219 minikube.k8s.io/primary=true
	I1216 21:05:21.721162   60215 ops.go:34] apiserver oom_adj: -16
	I1216 21:05:21.721292   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:22.221634   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:22.722431   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:23.221436   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:23.721948   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.222009   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.722203   60215 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 21:05:24.835684   60215 kubeadm.go:1113] duration metric: took 3.36522517s to wait for elevateKubeSystemPrivileges
	I1216 21:05:24.835729   60215 kubeadm.go:394] duration metric: took 5m0.316036708s to StartCluster
	I1216 21:05:24.835751   60215 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:05:24.835847   60215 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 21:05:24.838279   60215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 21:05:24.838580   60215 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.151 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 21:05:24.838625   60215 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 21:05:24.838747   60215 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-606219"
	I1216 21:05:24.838768   60215 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-606219"
	W1216 21:05:24.838789   60215 addons.go:243] addon storage-provisioner should already be in state true
	I1216 21:05:24.838816   60215 config.go:182] Loaded profile config "embed-certs-606219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 21:05:24.838825   60215 addons.go:69] Setting default-storageclass=true in profile "embed-certs-606219"
	I1216 21:05:24.838832   60215 addons.go:69] Setting metrics-server=true in profile "embed-certs-606219"
	I1216 21:05:24.838846   60215 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-606219"
	I1216 21:05:24.838822   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.838848   60215 addons.go:234] Setting addon metrics-server=true in "embed-certs-606219"
	W1216 21:05:24.838945   60215 addons.go:243] addon metrics-server should already be in state true
	I1216 21:05:24.838965   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.839285   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839292   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839331   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.839364   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.839415   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.839496   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.843833   60215 out.go:177] * Verifying Kubernetes components...
	I1216 21:05:24.845341   60215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 21:05:24.857648   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I1216 21:05:24.858457   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.859021   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.859037   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.861356   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36663
	I1216 21:05:24.861406   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I1216 21:05:24.861357   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.861844   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.862150   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.862188   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.862315   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.862334   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.862334   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.862661   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.862876   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.862894   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.863171   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.863200   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.863634   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.863964   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.867371   60215 addons.go:234] Setting addon default-storageclass=true in "embed-certs-606219"
	W1216 21:05:24.867392   60215 addons.go:243] addon default-storageclass should already be in state true
	I1216 21:05:24.867419   60215 host.go:66] Checking if "embed-certs-606219" exists ...
	I1216 21:05:24.867758   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.867801   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.884243   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35999
	I1216 21:05:24.884680   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.885282   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.885304   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.885380   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36799
	I1216 21:05:24.885657   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.885730   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.885934   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.886191   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.886202   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.886473   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.886831   60215 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/20091-7083/.minikube/bin/docker-machine-driver-kvm2
	I1216 21:05:24.886853   60215 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 21:05:24.887935   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.890092   60215 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 21:05:24.891395   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 21:05:24.891413   60215 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 21:05:24.891441   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.894367   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I1216 21:05:24.894926   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.895551   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.895570   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.895832   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.896148   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.896382   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.896501   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.896523   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.897136   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.897327   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.897507   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.897673   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:24.898101   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.900061   60215 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 21:05:24.901390   60215 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:05:24.901412   60215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 21:05:24.901432   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.904063   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.904403   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.904421   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.904617   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.904828   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.904969   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.905117   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:24.907518   60215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I1216 21:05:24.907890   60215 main.go:141] libmachine: () Calling .GetVersion
	I1216 21:05:24.908349   60215 main.go:141] libmachine: Using API Version  1
	I1216 21:05:24.908362   60215 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 21:05:24.908615   60215 main.go:141] libmachine: () Calling .GetMachineName
	I1216 21:05:24.908793   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetState
	I1216 21:05:24.910349   60215 main.go:141] libmachine: (embed-certs-606219) Calling .DriverName
	I1216 21:05:24.910557   60215 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 21:05:24.910590   60215 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 21:05:24.910623   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHHostname
	I1216 21:05:24.913163   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.913546   60215 main.go:141] libmachine: (embed-certs-606219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:37:8f", ip: ""} in network mk-embed-certs-606219: {Iface:virbr3 ExpiryTime:2024-12-16 22:00:10 +0000 UTC Type:0 Mac:52:54:00:63:37:8f Iaid: IPaddr:192.168.61.151 Prefix:24 Hostname:embed-certs-606219 Clientid:01:52:54:00:63:37:8f}
	I1216 21:05:24.913628   60215 main.go:141] libmachine: (embed-certs-606219) DBG | domain embed-certs-606219 has defined IP address 192.168.61.151 and MAC address 52:54:00:63:37:8f in network mk-embed-certs-606219
	I1216 21:05:24.913971   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHPort
	I1216 21:05:24.914156   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHKeyPath
	I1216 21:05:24.914402   60215 main.go:141] libmachine: (embed-certs-606219) Calling .GetSSHUsername
	I1216 21:05:24.914562   60215 sshutil.go:53] new ssh client: &{IP:192.168.61.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/embed-certs-606219/id_rsa Username:docker}
	I1216 21:05:25.054773   60215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 21:05:25.077692   60215 node_ready.go:35] waiting up to 6m0s for node "embed-certs-606219" to be "Ready" ...
	I1216 21:05:25.085592   60215 node_ready.go:49] node "embed-certs-606219" has status "Ready":"True"
	I1216 21:05:25.085618   60215 node_ready.go:38] duration metric: took 7.893359ms for node "embed-certs-606219" to be "Ready" ...
	I1216 21:05:25.085630   60215 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:05:25.092073   60215 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:25.160890   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 21:05:25.171950   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 21:05:25.174517   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 21:05:25.174540   60215 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 21:05:25.201386   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 21:05:25.201415   60215 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 21:05:25.279568   60215 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:25.279599   60215 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 21:05:25.316528   60215 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 21:05:25.944495   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944521   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944529   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944533   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944816   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.944835   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.944845   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944855   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944855   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.944869   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.944876   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.944888   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.944817   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945069   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945131   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.945147   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.945168   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.945173   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:25.945218   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:25.961427   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:25.961449   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:25.961729   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:25.961743   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.745600   60215 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.429029698s)
	I1216 21:05:26.745665   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:26.745678   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:26.746097   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:26.746115   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:26.746128   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.746142   60215 main.go:141] libmachine: Making call to close driver server
	I1216 21:05:26.746151   60215 main.go:141] libmachine: (embed-certs-606219) Calling .Close
	I1216 21:05:26.746429   60215 main.go:141] libmachine: Successfully made call to close driver server
	I1216 21:05:26.746446   60215 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 21:05:26.746457   60215 addons.go:475] Verifying addon metrics-server=true in "embed-certs-606219"
	I1216 21:05:26.746480   60215 main.go:141] libmachine: (embed-certs-606219) DBG | Closing plugin on server side
	I1216 21:05:26.748859   60215 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1216 21:05:26.750502   60215 addons.go:510] duration metric: took 1.911885721s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1216 21:05:26.785021   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:05:26.785309   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:05:27.124629   60215 pod_ready.go:103] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:28.100607   60215 pod_ready.go:93] pod "etcd-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:28.100642   60215 pod_ready.go:82] duration metric: took 3.008540123s for pod "etcd-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.100654   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.107620   60215 pod_ready.go:93] pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:28.107649   60215 pod_ready.go:82] duration metric: took 6.986126ms for pod "kube-apiserver-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:28.107661   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:30.114012   60215 pod_ready.go:103] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"False"
	I1216 21:05:31.116704   60215 pod_ready.go:93] pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:31.116738   60215 pod_ready.go:82] duration metric: took 3.009069732s for pod "kube-controller-manager-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.116752   60215 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.122043   60215 pod_ready.go:93] pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace has status "Ready":"True"
	I1216 21:05:31.122079   60215 pod_ready.go:82] duration metric: took 5.318248ms for pod "kube-scheduler-embed-certs-606219" in "kube-system" namespace to be "Ready" ...
	I1216 21:05:31.122089   60215 pod_ready.go:39] duration metric: took 6.036446164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 21:05:31.122107   60215 api_server.go:52] waiting for apiserver process to appear ...
	I1216 21:05:31.122167   60215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 21:05:31.140854   60215 api_server.go:72] duration metric: took 6.302233923s to wait for apiserver process to appear ...
	I1216 21:05:31.140887   60215 api_server.go:88] waiting for apiserver healthz status ...
	I1216 21:05:31.140910   60215 api_server.go:253] Checking apiserver healthz at https://192.168.61.151:8443/healthz ...
	I1216 21:05:31.146080   60215 api_server.go:279] https://192.168.61.151:8443/healthz returned 200:
	ok
	I1216 21:05:31.147076   60215 api_server.go:141] control plane version: v1.32.0
	I1216 21:05:31.147107   60215 api_server.go:131] duration metric: took 6.2056ms to wait for apiserver health ...
	I1216 21:05:31.147115   60215 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 21:05:31.152598   60215 system_pods.go:59] 9 kube-system pods found
	I1216 21:05:31.152627   60215 system_pods.go:61] "coredns-668d6bf9bc-5c74p" [ef8e73b6-150f-47cc-9df9-dcf983e5bd6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.152634   60215 system_pods.go:61] "coredns-668d6bf9bc-xhdlz" [c1b5b585-f005-4885-9809-60f60e03bf04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.152640   60215 system_pods.go:61] "etcd-embed-certs-606219" [f5595ee4-23f3-4227-8e25-8679fd2dc722] Running
	I1216 21:05:31.152643   60215 system_pods.go:61] "kube-apiserver-embed-certs-606219" [be11ba17-ecee-47c1-a4bd-329e0e705369] Running
	I1216 21:05:31.152647   60215 system_pods.go:61] "kube-controller-manager-embed-certs-606219" [21210597-d4d5-4cab-9a24-2d9f702f682d] Running
	I1216 21:05:31.152652   60215 system_pods.go:61] "kube-proxy-677x9" [37810520-4f02-46c4-8eeb-6dc70c859e3e] Running
	I1216 21:05:31.152655   60215 system_pods.go:61] "kube-scheduler-embed-certs-606219" [5a39f42d-b727-4acd-bd39-ae1c56a5b725] Running
	I1216 21:05:31.152659   60215 system_pods.go:61] "metrics-server-f79f97bbb-6fxnl" [828f2925-402c-4f49-89e1-354e082c0de4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:31.152662   60215 system_pods.go:61] "storage-provisioner" [6437bd61-690b-498d-b35c-e2ef4eb5be97] Running
	I1216 21:05:31.152669   60215 system_pods.go:74] duration metric: took 5.548798ms to wait for pod list to return data ...
	I1216 21:05:31.152675   60215 default_sa.go:34] waiting for default service account to be created ...
	I1216 21:05:31.155444   60215 default_sa.go:45] found service account: "default"
	I1216 21:05:31.155469   60215 default_sa.go:55] duration metric: took 2.788897ms for default service account to be created ...
	I1216 21:05:31.155477   60215 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 21:05:31.160520   60215 system_pods.go:86] 9 kube-system pods found
	I1216 21:05:31.160548   60215 system_pods.go:89] "coredns-668d6bf9bc-5c74p" [ef8e73b6-150f-47cc-9df9-dcf983e5bd6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.160555   60215 system_pods.go:89] "coredns-668d6bf9bc-xhdlz" [c1b5b585-f005-4885-9809-60f60e03bf04] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 21:05:31.160561   60215 system_pods.go:89] "etcd-embed-certs-606219" [f5595ee4-23f3-4227-8e25-8679fd2dc722] Running
	I1216 21:05:31.160565   60215 system_pods.go:89] "kube-apiserver-embed-certs-606219" [be11ba17-ecee-47c1-a4bd-329e0e705369] Running
	I1216 21:05:31.160569   60215 system_pods.go:89] "kube-controller-manager-embed-certs-606219" [21210597-d4d5-4cab-9a24-2d9f702f682d] Running
	I1216 21:05:31.160573   60215 system_pods.go:89] "kube-proxy-677x9" [37810520-4f02-46c4-8eeb-6dc70c859e3e] Running
	I1216 21:05:31.160576   60215 system_pods.go:89] "kube-scheduler-embed-certs-606219" [5a39f42d-b727-4acd-bd39-ae1c56a5b725] Running
	I1216 21:05:31.160580   60215 system_pods.go:89] "metrics-server-f79f97bbb-6fxnl" [828f2925-402c-4f49-89e1-354e082c0de4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 21:05:31.160584   60215 system_pods.go:89] "storage-provisioner" [6437bd61-690b-498d-b35c-e2ef4eb5be97] Running
	I1216 21:05:31.160591   60215 system_pods.go:126] duration metric: took 5.109359ms to wait for k8s-apps to be running ...
	I1216 21:05:31.160597   60215 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 21:05:31.160637   60215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:05:31.177182   60215 system_svc.go:56] duration metric: took 16.575484ms WaitForService to wait for kubelet
	I1216 21:05:31.177216   60215 kubeadm.go:582] duration metric: took 6.33860089s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 21:05:31.177239   60215 node_conditions.go:102] verifying NodePressure condition ...
	I1216 21:05:31.180614   60215 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 21:05:31.180635   60215 node_conditions.go:123] node cpu capacity is 2
	I1216 21:05:31.180645   60215 node_conditions.go:105] duration metric: took 3.400617ms to run NodePressure ...
	I1216 21:05:31.180656   60215 start.go:241] waiting for startup goroutines ...
	I1216 21:05:31.180667   60215 start.go:246] waiting for cluster config update ...
	I1216 21:05:31.180684   60215 start.go:255] writing updated cluster config ...
	I1216 21:05:31.180960   60215 ssh_runner.go:195] Run: rm -f paused
	I1216 21:05:31.232404   60215 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
	I1216 21:05:31.234366   60215 out.go:177] * Done! kubectl is now configured to use "embed-certs-606219" cluster and "default" namespace by default
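	The embed-certs-606219 start above finishes with storage-provisioner, default-storageclass and metrics-server enabled and the node reported "Ready". A minimal spot-check of that end state from the test host might look like the following (a sketch; it assumes kubectl is available locally, while the context name embed-certs-606219 is taken from the log):
		# confirm the node the log reports as Ready
		kubectl --context embed-certs-606219 get nodes
		# list the kube-system pods enumerated above (coredns, etcd, apiserver, metrics-server, storage-provisioner, ...)
		kubectl --context embed-certs-606219 -n kube-system get pods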
	I1216 21:06:06.787417   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:06.787673   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:06:06.787700   60933 kubeadm.go:310] 
	I1216 21:06:06.787779   60933 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 21:06:06.787849   60933 kubeadm.go:310] 		timed out waiting for the condition
	I1216 21:06:06.787864   60933 kubeadm.go:310] 
	I1216 21:06:06.787894   60933 kubeadm.go:310] 	This error is likely caused by:
	I1216 21:06:06.787944   60933 kubeadm.go:310] 		- The kubelet is not running
	I1216 21:06:06.788115   60933 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 21:06:06.788131   60933 kubeadm.go:310] 
	I1216 21:06:06.788238   60933 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 21:06:06.788270   60933 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 21:06:06.788328   60933 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 21:06:06.788346   60933 kubeadm.go:310] 
	I1216 21:06:06.788492   60933 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 21:06:06.788568   60933 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 21:06:06.788575   60933 kubeadm.go:310] 
	I1216 21:06:06.788706   60933 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 21:06:06.788914   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 21:06:06.789052   60933 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 21:06:06.789150   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 21:06:06.789160   60933 kubeadm.go:310] 
	I1216 21:06:06.789970   60933 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:06:06.790084   60933 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 21:06:06.790222   60933 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
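	The kubeadm output above already names the checks for a kubelet that never becomes healthy; collected into one sequence for convenience (a sketch to be run on the affected node, e.g. via minikube ssh for that profile; the CRI-O socket path is the one from the log):
		systemctl status kubelet
		journalctl -xeu kubelet
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # substitute the ID of the failing container found above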
	W1216 21:06:06.790376   60933 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 21:06:06.790430   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 21:06:07.272336   60933 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 21:06:07.288881   60933 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 21:06:07.303411   60933 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 21:06:07.303437   60933 kubeadm.go:157] found existing configuration files:
	
	I1216 21:06:07.303486   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 21:06:07.314605   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 21:06:07.314675   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 21:06:07.326523   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 21:06:07.336506   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 21:06:07.336587   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 21:06:07.347505   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 21:06:07.357743   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 21:06:07.357799   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 21:06:07.368251   60933 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 21:06:07.378296   60933 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 21:06:07.378366   60933 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
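	The four grep/rm pairs above apply one rule: any leftover kubeconfig under /etc/kubernetes that no longer references the control-plane endpoint is removed before retrying kubeadm init. A hedged shell sketch of the same pattern (file names and endpoint taken from the log; the loop form is illustrative, not minikube's actual code):
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
		done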
	I1216 21:06:07.390625   60933 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 21:06:07.461800   60933 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 21:06:07.461911   60933 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 21:06:07.607467   60933 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 21:06:07.607664   60933 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 21:06:07.607821   60933 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 21:06:07.821429   60933 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 21:06:07.823617   60933 out.go:235]   - Generating certificates and keys ...
	I1216 21:06:07.823728   60933 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 21:06:07.823826   60933 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 21:06:07.823970   60933 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 21:06:07.824066   60933 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 21:06:07.824191   60933 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 21:06:07.824281   60933 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 21:06:07.824374   60933 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 21:06:07.824452   60933 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 21:06:07.824529   60933 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 21:06:07.824634   60933 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 21:06:07.824728   60933 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 21:06:07.824826   60933 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 21:06:08.070481   60933 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 21:06:08.416182   60933 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 21:06:08.472848   60933 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 21:06:08.528700   60933 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 21:06:08.551528   60933 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 21:06:08.552215   60933 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 21:06:08.552299   60933 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 21:06:08.702187   60933 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 21:06:08.704170   60933 out.go:235]   - Booting up control plane ...
	I1216 21:06:08.704286   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 21:06:08.721205   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 21:06:08.722619   60933 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 21:06:08.724289   60933 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 21:06:08.726457   60933 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 21:06:48.729045   60933 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 21:06:48.729713   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:48.730028   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:06:53.730648   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:06:53.730870   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:07:03.731670   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:07:03.731904   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:07:23.733276   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:07:23.733489   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:08:03.734439   60933 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 21:08:03.734730   60933 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 21:08:03.734768   60933 kubeadm.go:310] 
	I1216 21:08:03.734831   60933 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 21:08:03.734902   60933 kubeadm.go:310] 		timed out waiting for the condition
	I1216 21:08:03.734917   60933 kubeadm.go:310] 
	I1216 21:08:03.734966   60933 kubeadm.go:310] 	This error is likely caused by:
	I1216 21:08:03.735003   60933 kubeadm.go:310] 		- The kubelet is not running
	I1216 21:08:03.735094   60933 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 21:08:03.735104   60933 kubeadm.go:310] 
	I1216 21:08:03.735260   60933 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 21:08:03.735325   60933 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 21:08:03.735353   60933 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 21:08:03.735359   60933 kubeadm.go:310] 
	I1216 21:08:03.735486   60933 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 21:08:03.735604   60933 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 21:08:03.735614   60933 kubeadm.go:310] 
	I1216 21:08:03.735757   60933 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 21:08:03.735880   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 21:08:03.735986   60933 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 21:08:03.736096   60933 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 21:08:03.736107   60933 kubeadm.go:310] 
	I1216 21:08:03.736944   60933 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 21:08:03.737145   60933 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 21:08:03.737211   60933 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 21:08:03.737287   60933 kubeadm.go:394] duration metric: took 7m57.891196073s to StartCluster
	I1216 21:08:03.737346   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 21:08:03.737417   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 21:08:03.789377   60933 cri.go:89] found id: ""
	I1216 21:08:03.789412   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.789421   60933 logs.go:284] No container was found matching "kube-apiserver"
	I1216 21:08:03.789426   60933 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 21:08:03.789477   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 21:08:03.831122   60933 cri.go:89] found id: ""
	I1216 21:08:03.831150   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.831161   60933 logs.go:284] No container was found matching "etcd"
	I1216 21:08:03.831167   60933 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 21:08:03.831236   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 21:08:03.870598   60933 cri.go:89] found id: ""
	I1216 21:08:03.870625   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.870634   60933 logs.go:284] No container was found matching "coredns"
	I1216 21:08:03.870640   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 21:08:03.870695   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 21:08:03.909060   60933 cri.go:89] found id: ""
	I1216 21:08:03.909095   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.909103   60933 logs.go:284] No container was found matching "kube-scheduler"
	I1216 21:08:03.909109   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 21:08:03.909163   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 21:08:03.946925   60933 cri.go:89] found id: ""
	I1216 21:08:03.946954   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.946962   60933 logs.go:284] No container was found matching "kube-proxy"
	I1216 21:08:03.946968   60933 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 21:08:03.947038   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 21:08:03.985596   60933 cri.go:89] found id: ""
	I1216 21:08:03.985629   60933 logs.go:282] 0 containers: []
	W1216 21:08:03.985650   60933 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 21:08:03.985670   60933 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 21:08:03.985736   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 21:08:04.022504   60933 cri.go:89] found id: ""
	I1216 21:08:04.022530   60933 logs.go:282] 0 containers: []
	W1216 21:08:04.022538   60933 logs.go:284] No container was found matching "kindnet"
	I1216 21:08:04.022545   60933 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 21:08:04.022608   60933 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 21:08:04.075636   60933 cri.go:89] found id: ""
	I1216 21:08:04.075667   60933 logs.go:282] 0 containers: []
	W1216 21:08:04.075677   60933 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 21:08:04.075688   60933 logs.go:123] Gathering logs for describe nodes ...
	I1216 21:08:04.075707   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 21:08:04.180622   60933 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 21:08:04.180653   60933 logs.go:123] Gathering logs for CRI-O ...
	I1216 21:08:04.180671   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 21:08:04.308091   60933 logs.go:123] Gathering logs for container status ...
	I1216 21:08:04.308146   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 21:08:04.353240   60933 logs.go:123] Gathering logs for kubelet ...
	I1216 21:08:04.353294   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 21:08:04.407919   60933 logs.go:123] Gathering logs for dmesg ...
	I1216 21:08:04.407955   60933 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1216 21:08:04.423583   60933 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1216 21:08:04.423644   60933 out.go:270] * 
	W1216 21:08:04.423727   60933 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 21:08:04.423749   60933 out.go:270] * 
	W1216 21:08:04.424576   60933 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 21:08:04.428361   60933 out.go:201] 
	W1216 21:08:04.429839   60933 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 21:08:04.429919   60933 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 21:08:04.429958   60933 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 21:08:04.431619   60933 out.go:201] 
	
	
	==> CRI-O <==
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.617772677Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383923617737854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78398848-55fc-4d51-8702-89adad0f9ee7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.618454977Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58e2c71e-2fa1-4f60-8f39-f8a6ea11f6c3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.618541758Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58e2c71e-2fa1-4f60-8f39-f8a6ea11f6c3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.618620478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=58e2c71e-2fa1-4f60-8f39-f8a6ea11f6c3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.653335791Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56112a84-7108-4d05-b7c5-960f80f9fca8 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.653477622Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56112a84-7108-4d05-b7c5-960f80f9fca8 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.654912572Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e25a1d2-9f8d-4267-ac0a-dca396254530 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.655315361Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383923655291943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e25a1d2-9f8d-4267-ac0a-dca396254530 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.655983989Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28034926-f0bc-4dac-9a49-a90328e6c630 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.656059757Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28034926-f0bc-4dac-9a49-a90328e6c630 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.656093574Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=28034926-f0bc-4dac-9a49-a90328e6c630 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.705204239Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b7e9438e-8ed0-4ace-98fd-e8e4b99e9675 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.705305479Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7e9438e-8ed0-4ace-98fd-e8e4b99e9675 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.706629538Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9cc45d21-243f-4294-ae22-cbd0d00c28bb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.707044029Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383923707020288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9cc45d21-243f-4294-ae22-cbd0d00c28bb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.707540833Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87393f78-6b12-413c-8f4d-59f7f6067e0d name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.707656254Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87393f78-6b12-413c-8f4d-59f7f6067e0d name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.707692986Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=87393f78-6b12-413c-8f4d-59f7f6067e0d name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.759623522Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=881324f4-95ee-48b9-98fe-629e8344d4c7 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.759725160Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=881324f4-95ee-48b9-98fe-629e8344d4c7 name=/runtime.v1.RuntimeService/Version
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.760803246Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cff5a185-0ce5-4d70-8444-cbc5d8609f0b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.761190779Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734383923761165827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cff5a185-0ce5-4d70-8444-cbc5d8609f0b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.761717111Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52c5aa18-7f7e-4420-a04c-ad240b458704 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.761783907Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52c5aa18-7f7e-4420-a04c-ad240b458704 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 21:18:43 old-k8s-version-847766 crio[626]: time="2024-12-16 21:18:43.761824317Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=52c5aa18-7f7e-4420-a04c-ad240b458704 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 20:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053004] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042792] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.137612] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.914611] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.669532] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000014] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.090745] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.063238] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068057] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.211871] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.132194] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.273053] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[Dec16 21:00] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.063116] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.314286] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[ +12.945441] kauditd_printk_skb: 46 callbacks suppressed
	[Dec16 21:04] systemd-fstab-generator[4991]: Ignoring "noauto" option for root device
	[Dec16 21:06] systemd-fstab-generator[5267]: Ignoring "noauto" option for root device
	[  +0.075796] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:18:43 up 19 min,  0 users,  load average: 0.10, 0.05, 0.04
	Linux old-k8s-version-847766 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6696]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b20f20, 0x4f7fe00, 0xc000128018, 0x48ab5d6, 0x3, 0xc000baa540, 0x24, 0x60, 0x7f6fc369c1f8, 0x118, ...)
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6696]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6696]: net/http.(*Transport).dial(0xc000620f00, 0x4f7fe00, 0xc000128018, 0x48ab5d6, 0x3, 0xc000baa540, 0x24, 0x0, 0x0, 0x0, ...)
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6696]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6696]: net/http.(*Transport).dialConn(0xc000620f00, 0x4f7fe00, 0xc000128018, 0x0, 0xc00035a480, 0x5, 0xc000baa540, 0x24, 0x0, 0xc000c127e0, ...)
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6696]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6696]: net/http.(*Transport).dialConnFor(0xc000620f00, 0xc000ba5600)
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6696]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6696]: created by net/http.(*Transport).queueForDial
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6696]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6696]: goroutine 170 [select]:
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6696]: net.(*netFD).connect.func2(0x4f7fe40, 0xc000bf1a40, 0xc0003d0700, 0xc000111ec0, 0xc000111e60)
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6696]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6696]: created by net.(*netFD).connect
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6696]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Dec 16 21:18:43 old-k8s-version-847766 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 16 21:18:43 old-k8s-version-847766 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 21:18:43 old-k8s-version-847766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 131.
	Dec 16 21:18:43 old-k8s-version-847766 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 16 21:18:43 old-k8s-version-847766 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6757]: I1216 21:18:43.794790    6757 server.go:416] Version: v1.20.0
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6757]: I1216 21:18:43.795185    6757 server.go:837] Client rotation is on, will bootstrap in background
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6757]: I1216 21:18:43.797986    6757 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6757]: W1216 21:18:43.799658    6757 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 16 21:18:43 old-k8s-version-847766 kubelet[6757]: I1216 21:18:43.799842    6757 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-847766 -n old-k8s-version-847766
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-847766 -n old-k8s-version-847766: exit status 2 (243.672469ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-847766" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (93.92s)
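For reference, the troubleshooting steps that the kubeadm/minikube output above itself suggests can be run against this profile roughly as follows (a sketch only, assuming the old-k8s-version-847766 VM from this run is still available; the commands are taken from the suggestions printed in the log, not from the test code):

	# check kubelet status and recent logs on the node
	out/minikube-linux-amd64 -p old-k8s-version-847766 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-847766 ssh "sudo journalctl -xeu kubelet"
	# list control-plane containers via crictl, as suggested by kubeadm
	out/minikube-linux-amd64 -p old-k8s-version-847766 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the start with the cgroup-driver override suggested by minikube
	out/minikube-linux-amd64 start -p old-k8s-version-847766 --extra-config=kubelet.cgroup-driver=systemd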

                                                
                                    

Test pass (253/314)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.63
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.0/json-events 5.05
13 TestDownloadOnly/v1.32.0/preload-exists 0
17 TestDownloadOnly/v1.32.0/LogsDuration 0.06
18 TestDownloadOnly/v1.32.0/DeleteAll 0.14
19 TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
22 TestOffline 54.72
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 136.4
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 7.52
35 TestAddons/parallel/Registry 19.9
37 TestAddons/parallel/InspektorGadget 11.83
38 TestAddons/parallel/MetricsServer 7.22
40 TestAddons/parallel/CSI 48.18
41 TestAddons/parallel/Headlamp 23.25
42 TestAddons/parallel/CloudSpanner 5.59
43 TestAddons/parallel/LocalPath 10.21
44 TestAddons/parallel/NvidiaDevicePlugin 7.02
45 TestAddons/parallel/Yakd 12.73
47 TestAddons/StoppedEnableDisable 91.26
48 TestCertOptions 82.74
49 TestCertExpiration 308.3
51 TestForceSystemdFlag 73.17
52 TestForceSystemdEnv 48.16
54 TestKVMDriverInstallOrUpdate 3.39
58 TestErrorSpam/setup 42.23
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.76
61 TestErrorSpam/pause 1.69
62 TestErrorSpam/unpause 1.85
63 TestErrorSpam/stop 5.38
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 58.12
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 38.24
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.53
75 TestFunctional/serial/CacheCmd/cache/add_local 1.47
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.76
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.1
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 371.93
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.47
86 TestFunctional/serial/LogsFileCmd 1.52
87 TestFunctional/serial/InvalidService 4.38
89 TestFunctional/parallel/ConfigCmd 0.35
90 TestFunctional/parallel/DashboardCmd 20.28
91 TestFunctional/parallel/DryRun 0.31
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.97
97 TestFunctional/parallel/ServiceCmdConnect 15.47
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 45.82
101 TestFunctional/parallel/SSHCmd 0.45
102 TestFunctional/parallel/CpCmd 1.85
103 TestFunctional/parallel/MySQL 29.35
104 TestFunctional/parallel/FileSync 0.25
105 TestFunctional/parallel/CertSync 1.25
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
113 TestFunctional/parallel/License 0.18
114 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
118 TestFunctional/parallel/Version/components 0.52
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
122 TestFunctional/parallel/ImageCommands/ImageBuild 3.23
123 TestFunctional/parallel/ImageCommands/Setup 0.96
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
132 TestFunctional/parallel/ProfileCmd/profile_list 0.37
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.78
135 TestFunctional/parallel/ServiceCmd/DeployApp 9.28
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.35
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.22
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.56
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.88
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
145 TestFunctional/parallel/ServiceCmd/List 0.44
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
148 TestFunctional/parallel/ServiceCmd/Format 0.39
149 TestFunctional/parallel/ServiceCmd/URL 0.32
150 TestFunctional/parallel/MountCmd/any-port 20.77
151 TestFunctional/parallel/MountCmd/specific-port 1.82
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.68
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 207.99
160 TestMultiControlPlane/serial/DeployApp 5.42
161 TestMultiControlPlane/serial/PingHostFromPods 1.22
162 TestMultiControlPlane/serial/AddWorkerNode 56.66
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
165 TestMultiControlPlane/serial/CopyFile 13.2
166 TestMultiControlPlane/serial/StopSecondaryNode 91.68
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.65
168 TestMultiControlPlane/serial/RestartSecondaryNode 46.7
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 467.76
171 TestMultiControlPlane/serial/DeleteSecondaryNode 18.51
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
173 TestMultiControlPlane/serial/StopCluster 272.78
174 TestMultiControlPlane/serial/RestartCluster 120.26
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
176 TestMultiControlPlane/serial/AddSecondaryNode 77.96
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
181 TestJSONOutput/start/Command 81.68
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.72
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.67
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 23.4
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 94.36
213 TestMountStart/serial/StartWithMountFirst 27.88
214 TestMountStart/serial/VerifyMountFirst 0.39
215 TestMountStart/serial/StartWithMountSecond 27.99
216 TestMountStart/serial/VerifyMountSecond 0.38
217 TestMountStart/serial/DeleteFirst 0.71
218 TestMountStart/serial/VerifyMountPostDelete 0.39
219 TestMountStart/serial/Stop 1.32
220 TestMountStart/serial/RestartStopped 21.29
221 TestMountStart/serial/VerifyMountPostStop 0.39
224 TestMultiNode/serial/FreshStart2Nodes 112.75
225 TestMultiNode/serial/DeployApp2Nodes 4.07
226 TestMultiNode/serial/PingHostFrom2Pods 0.81
227 TestMultiNode/serial/AddNode 52.57
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.59
230 TestMultiNode/serial/CopyFile 7.37
231 TestMultiNode/serial/StopNode 2.37
232 TestMultiNode/serial/StartAfterStop 38.82
233 TestMultiNode/serial/RestartKeepsNodes 421.88
234 TestMultiNode/serial/DeleteNode 2.49
235 TestMultiNode/serial/StopMultiNode 181.91
236 TestMultiNode/serial/RestartMultiNode 116.54
237 TestMultiNode/serial/ValidateNameConflict 46.35
244 TestScheduledStopUnix 114.21
248 TestRunningBinaryUpgrade 221.09
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
254 TestNoKubernetes/serial/StartWithK8s 94.88
263 TestPause/serial/Start 138.89
264 TestNoKubernetes/serial/StartWithStopK8s 47.2
265 TestNoKubernetes/serial/Start 31.57
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
267 TestNoKubernetes/serial/ProfileList 30.45
269 TestNoKubernetes/serial/Stop 1.32
270 TestNoKubernetes/serial/StartNoArgs 25.82
278 TestNetworkPlugins/group/false 3.02
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
283 TestStoppedBinaryUpgrade/Setup 0.53
284 TestStoppedBinaryUpgrade/Upgrade 161.88
288 TestStartStop/group/no-preload/serial/FirstStart 107.81
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.89
291 TestStartStop/group/embed-certs/serial/FirstStart 67.74
292 TestStartStop/group/embed-certs/serial/DeployApp 9.28
293 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.1
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.88
297 TestStartStop/group/no-preload/serial/DeployApp 8.33
298 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
306 TestStartStop/group/embed-certs/serial/SecondStart 671.85
308 TestStartStop/group/no-preload/serial/SecondStart 624.83
310 TestStartStop/group/old-k8s-version/serial/Stop 6.32
311 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 566.48
312 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
323 TestStartStop/group/newest-cni/serial/FirstStart 47.75
324 TestStartStop/group/newest-cni/serial/DeployApp 0
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.17
326 TestStartStop/group/newest-cni/serial/Stop 7.36
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
328 TestStartStop/group/newest-cni/serial/SecondStart 37.84
329 TestNetworkPlugins/group/auto/Start 55.18
330 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
333 TestStartStop/group/newest-cni/serial/Pause 2.68
334 TestNetworkPlugins/group/kindnet/Start 84.59
335 TestNetworkPlugins/group/calico/Start 121.85
336 TestNetworkPlugins/group/auto/KubeletFlags 0.25
337 TestNetworkPlugins/group/auto/NetCatPod 11.94
338 TestNetworkPlugins/group/auto/DNS 0.16
339 TestNetworkPlugins/group/auto/Localhost 0.13
340 TestNetworkPlugins/group/auto/HairPin 0.13
341 TestNetworkPlugins/group/custom-flannel/Start 74.92
342 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
343 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
344 TestNetworkPlugins/group/flannel/Start 102.08
345 TestNetworkPlugins/group/kindnet/NetCatPod 9.51
346 TestNetworkPlugins/group/kindnet/DNS 0.19
347 TestNetworkPlugins/group/kindnet/Localhost 0.13
348 TestNetworkPlugins/group/kindnet/HairPin 0.12
349 TestNetworkPlugins/group/bridge/Start 107.8
350 TestNetworkPlugins/group/calico/ControllerPod 6.01
351 TestNetworkPlugins/group/calico/KubeletFlags 0.22
352 TestNetworkPlugins/group/calico/NetCatPod 11.44
353 TestNetworkPlugins/group/calico/DNS 0.18
354 TestNetworkPlugins/group/calico/Localhost 0.15
355 TestNetworkPlugins/group/calico/HairPin 0.16
356 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
357 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.37
358 TestNetworkPlugins/group/custom-flannel/DNS 0.26
359 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
360 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
361 TestNetworkPlugins/group/enable-default-cni/Start 83.91
362 TestNetworkPlugins/group/flannel/ControllerPod 6.09
363 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
364 TestNetworkPlugins/group/flannel/NetCatPod 11.53
365 TestNetworkPlugins/group/flannel/DNS 0.15
366 TestNetworkPlugins/group/flannel/Localhost 0.14
367 TestNetworkPlugins/group/flannel/HairPin 0.13
368 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
369 TestNetworkPlugins/group/bridge/NetCatPod 11.24
370 TestNetworkPlugins/group/bridge/DNS 0.15
371 TestNetworkPlugins/group/bridge/Localhost 0.13
372 TestNetworkPlugins/group/bridge/HairPin 0.13
373 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
374 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
375 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
376 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
377 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (8.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-646102 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-646102 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.631378392s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.63s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1216 19:34:50.195444   14254 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1216 19:34:50.195524   14254 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-646102
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-646102: exit status 85 (62.483637ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-646102 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC |          |
	|         | -p download-only-646102        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 19:34:41
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 19:34:41.602715   14267 out.go:345] Setting OutFile to fd 1 ...
	I1216 19:34:41.602942   14267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:34:41.602950   14267 out.go:358] Setting ErrFile to fd 2...
	I1216 19:34:41.602954   14267 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:34:41.603140   14267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	W1216 19:34:41.603274   14267 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20091-7083/.minikube/config/config.json: open /home/jenkins/minikube-integration/20091-7083/.minikube/config/config.json: no such file or directory
	I1216 19:34:41.603821   14267 out.go:352] Setting JSON to true
	I1216 19:34:41.604707   14267 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1027,"bootTime":1734376655,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 19:34:41.604767   14267 start.go:139] virtualization: kvm guest
	I1216 19:34:41.607479   14267 out.go:97] [download-only-646102] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1216 19:34:41.607597   14267 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball: no such file or directory
	I1216 19:34:41.607636   14267 notify.go:220] Checking for updates...
	I1216 19:34:41.609186   14267 out.go:169] MINIKUBE_LOCATION=20091
	I1216 19:34:41.610552   14267 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 19:34:41.612497   14267 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 19:34:41.613986   14267 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 19:34:41.615405   14267 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1216 19:34:41.617815   14267 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 19:34:41.618029   14267 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 19:34:41.717441   14267 out.go:97] Using the kvm2 driver based on user configuration
	I1216 19:34:41.717484   14267 start.go:297] selected driver: kvm2
	I1216 19:34:41.717493   14267 start.go:901] validating driver "kvm2" against <nil>
	I1216 19:34:41.717821   14267 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 19:34:41.717949   14267 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 19:34:41.734285   14267 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1216 19:34:41.734350   14267 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 19:34:41.734883   14267 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1216 19:34:41.735036   14267 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 19:34:41.735064   14267 cni.go:84] Creating CNI manager for ""
	I1216 19:34:41.735107   14267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 19:34:41.735116   14267 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 19:34:41.735186   14267 start.go:340] cluster config:
	{Name:download-only-646102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-646102 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 19:34:41.735437   14267 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 19:34:41.737408   14267 out.go:97] Downloading VM boot image ...
	I1216 19:34:41.737447   14267 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20091-7083/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso
	I1216 19:34:45.216259   14267 out.go:97] Starting "download-only-646102" primary control-plane node in "download-only-646102" cluster
	I1216 19:34:45.216285   14267 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 19:34:45.258004   14267 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1216 19:34:45.258030   14267 cache.go:56] Caching tarball of preloaded images
	I1216 19:34:45.258188   14267 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 19:34:45.260105   14267 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1216 19:34:45.260126   14267 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1216 19:34:45.289144   14267 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-646102 host does not exist
	  To start a cluster, run: "minikube start -p download-only-646102"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-646102
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/json-events (5.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-654038 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-654038 --force --alsologtostderr --kubernetes-version=v1.32.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.047172168s)
--- PASS: TestDownloadOnly/v1.32.0/json-events (5.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/preload-exists
I1216 19:34:55.576124   14254 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
I1216 19:34:55.576168   14254 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-654038
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-654038: exit status 85 (60.78161ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-646102 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC |                     |
	|         | -p download-only-646102        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC | 16 Dec 24 19:34 UTC |
	| delete  | -p download-only-646102        | download-only-646102 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC | 16 Dec 24 19:34 UTC |
	| start   | -o=json --download-only        | download-only-654038 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC |                     |
	|         | -p download-only-654038        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 19:34:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 19:34:50.569869   14477 out.go:345] Setting OutFile to fd 1 ...
	I1216 19:34:50.570027   14477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:34:50.570040   14477 out.go:358] Setting ErrFile to fd 2...
	I1216 19:34:50.570046   14477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:34:50.570220   14477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 19:34:50.570796   14477 out.go:352] Setting JSON to true
	I1216 19:34:50.571640   14477 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1036,"bootTime":1734376655,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 19:34:50.571741   14477 start.go:139] virtualization: kvm guest
	I1216 19:34:50.573981   14477 out.go:97] [download-only-654038] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 19:34:50.574196   14477 notify.go:220] Checking for updates...
	I1216 19:34:50.575764   14477 out.go:169] MINIKUBE_LOCATION=20091
	I1216 19:34:50.577141   14477 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 19:34:50.578727   14477 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 19:34:50.580232   14477 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 19:34:50.581779   14477 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-654038 host does not exist
	  To start a cluster, run: "minikube start -p download-only-654038"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.32.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-654038
--- PASS: TestDownloadOnly/v1.32.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1216 19:34:56.172591   14254 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-010223 --alsologtostderr --binary-mirror http://127.0.0.1:42673 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-010223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-010223
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestOffline (54.72s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-531631 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-531631 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (53.576982533s)
helpers_test.go:175: Cleaning up "offline-crio-531631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-531631
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-531631: (1.144016223s)
--- PASS: TestOffline (54.72s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-618388
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-618388: exit status 85 (50.588688ms)

                                                
                                                
-- stdout --
	* Profile "addons-618388" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-618388"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-618388
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-618388: exit status 85 (53.292334ms)

                                                
                                                
-- stdout --
	* Profile "addons-618388" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-618388"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (136.4s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-618388 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-618388 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m16.399310223s)
--- PASS: TestAddons/Setup (136.40s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-618388 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-618388 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (7.52s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-618388 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-618388 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [12fe933d-2f3a-4b23-9e9d-2faa73db353b] Pending
helpers_test.go:344: "busybox" [12fe933d-2f3a-4b23-9e9d-2faa73db353b] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.004031541s
addons_test.go:633: (dbg) Run:  kubectl --context addons-618388 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-618388 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-618388 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.52s)

                                                
                                    
TestAddons/parallel/Registry (19.9s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 5.831405ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c86875c6f-lxvbn" [ec5514ad-5010-4fd5-bae5-fa96610b47b8] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005879364s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-49ln5" [29c16cb5-dd77-4e42-a748-3d4a7a80fb9c] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.057986664s
addons_test.go:331: (dbg) Run:  kubectl --context addons-618388 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-618388 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-618388 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.62523857s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 ip
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-618388 addons disable registry --alsologtostderr -v=1: (1.040976522s)
--- PASS: TestAddons/parallel/Registry (19.90s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.83s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-v7tkq" [306dad49-4ab5-4884-b9ad-6daf4a537cb0] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.007618181s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-618388 addons disable inspektor-gadget --alsologtostderr -v=1: (5.817970832s)
--- PASS: TestAddons/parallel/InspektorGadget (11.83s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.22s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 4.414611ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-c995d" [4213f921-b992-420b-bd80-e0ad67a43567] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.007024155s
addons_test.go:402: (dbg) Run:  kubectl --context addons-618388 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 addons disable metrics-server --alsologtostderr -v=1
2024/12/16 19:37:48 [DEBUG] GET http://192.168.39.82:5000
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-618388 addons disable metrics-server --alsologtostderr -v=1: (1.136913956s)
--- PASS: TestAddons/parallel/MetricsServer (7.22s)

                                                
                                    
TestAddons/parallel/CSI (48.18s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1216 19:37:36.785017   14254 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1216 19:37:36.807096   14254 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1216 19:37:36.807137   14254 kapi.go:107] duration metric: took 22.129242ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 22.145141ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-618388 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-618388 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7fd5a5e8-f791-46f5-a6c3-8728729491bd] Pending
helpers_test.go:344: "task-pv-pod" [7fd5a5e8-f791-46f5-a6c3-8728729491bd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7fd5a5e8-f791-46f5-a6c3-8728729491bd] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.002952273s
addons_test.go:511: (dbg) Run:  kubectl --context addons-618388 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-618388 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-618388 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-618388 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-618388 delete pod task-pv-pod: (1.199231709s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-618388 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-618388 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-618388 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c331e026-63d4-4f10-a0c5-3bf7d22b1740] Pending
helpers_test.go:344: "task-pv-pod-restore" [c331e026-63d4-4f10-a0c5-3bf7d22b1740] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c331e026-63d4-4f10-a0c5-3bf7d22b1740] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003922986s
addons_test.go:553: (dbg) Run:  kubectl --context addons-618388 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-618388 delete pod task-pv-pod-restore: (1.779227645s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-618388 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-618388 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-618388 addons disable volumesnapshots --alsologtostderr -v=1: (1.074832862s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-618388 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.858863285s)
--- PASS: TestAddons/parallel/CSI (48.18s)

                                                
                                    
TestAddons/parallel/Headlamp (23.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-618388 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-2hxsk" [51cd8c81-2d17-42a3-b3ae-b0244c7820ca] Pending
helpers_test.go:344: "headlamp-69d78d796f-2hxsk" [51cd8c81-2d17-42a3-b3ae-b0244c7820ca] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-2hxsk" [51cd8c81-2d17-42a3-b3ae-b0244c7820ca] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.00607511s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-618388 addons disable headlamp --alsologtostderr -v=1: (6.28465139s)
--- PASS: TestAddons/parallel/Headlamp (23.25s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5498fbc9c4-g9dsk" [a549c60e-50eb-4fe8-af60-a4bbaefdddb1] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004502694s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                    
TestAddons/parallel/LocalPath (10.21s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-618388 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-618388 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-618388 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1263c96d-3cbe-4d81-b239-4a609bac36c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1263c96d-3cbe-4d81-b239-4a609bac36c0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1263c96d-3cbe-4d81-b239-4a609bac36c0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003556745s
addons_test.go:906: (dbg) Run:  kubectl --context addons-618388 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 ssh "cat /opt/local-path-provisioner/pvc-4e008b7b-de06-41f9-8097-3d4fc784c52a_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-618388 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-618388 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.21s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (7.02s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fmpb4" [e8d4bb90-d999-45bf-96e0-304cf36a3790] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004230789s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-618388 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.014469723s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.02s)

                                                
                                    
TestAddons/parallel/Yakd (12.73s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-qxk5s" [658a9af8-0be8-428a-aca9-b2124f9ff5c3] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004212923s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-618388 addons disable yakd --alsologtostderr -v=1: (6.727967421s)
--- PASS: TestAddons/parallel/Yakd (12.73s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.26s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-618388
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-618388: (1m30.981393809s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-618388
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-618388
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-618388
--- PASS: TestAddons/StoppedEnableDisable (91.26s)

                                                
                                    
TestCertOptions (82.74s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-254143 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-254143 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m21.406657914s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-254143 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-254143 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-254143 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-254143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-254143
--- PASS: TestCertOptions (82.74s)

                                                
                                    
TestCertExpiration (308.3s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-270954 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E1216 20:47:13.883911   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-270954 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m26.93251825s)
E1216 20:48:36.959066   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-270954 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-270954 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (40.506631405s)
helpers_test.go:175: Cleaning up "cert-expiration-270954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-270954
--- PASS: TestCertExpiration (308.30s)

                                                
                                    
TestForceSystemdFlag (73.17s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-406516 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-406516 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m11.992356726s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-406516 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-406516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-406516
--- PASS: TestForceSystemdFlag (73.17s)

                                                
                                    
TestForceSystemdEnv (48.16s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-893512 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
I1216 20:46:31.194101   14254 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1665397985/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x532a240 0x532a240 0x532a240 0x532a240 0x532a240 0x532a240 0x532a240] Decompressors:map[bz2:0xc0004b4d80 gz:0xc0004b4d88 tar:0xc0004b4d20 tar.bz2:0xc0004b4d30 tar.gz:0xc0004b4d40 tar.xz:0xc0004b4d50 tar.zst:0xc0004b4d60 tbz2:0xc0004b4d30 tgz:0xc0004b4d40 txz:0xc0004b4d50 tzst:0xc0004b4d60 xz:0xc0004b4da0 zip:0xc0004b4db0 zst:0xc0004b4da8] Getters:map[file:0xc001bc0120 http:0xc00067a870 https:0xc00067a8c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1216 20:46:31.194159   14254 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1665397985/002/docker-machine-driver-kvm2
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-893512 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.385540497s)
helpers_test.go:175: Cleaning up "force-systemd-env-893512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-893512
--- PASS: TestForceSystemdEnv (48.16s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.39s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1216 20:46:29.092198   14254 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1216 20:46:29.092343   14254 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1216 20:46:29.119793   14254 install.go:62] docker-machine-driver-kvm2: exit status 1
W1216 20:46:29.120171   14254 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1216 20:46:29.120249   14254 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1665397985/001/docker-machine-driver-kvm2
I1216 20:46:29.394919   14254 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1665397985/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x532a240 0x532a240 0x532a240 0x532a240 0x532a240 0x532a240 0x532a240] Decompressors:map[bz2:0xc0004b4d80 gz:0xc0004b4d88 tar:0xc0004b4d20 tar.bz2:0xc0004b4d30 tar.gz:0xc0004b4d40 tar.xz:0xc0004b4d50 tar.zst:0xc0004b4d60 tbz2:0xc0004b4d30 tgz:0xc0004b4d40 txz:0xc0004b4d50 tzst:0xc0004b4d60 xz:0xc0004b4da0 zip:0xc0004b4db0 zst:0xc0004b4da8] Getters:map[file:0xc001cef700 http:0xc000168d20 https:0xc000168d70] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1216 20:46:29.394976   14254 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1665397985/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.39s)

                                                
                                    
TestErrorSpam/setup (42.23s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-761932 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-761932 --driver=kvm2  --container-runtime=crio
E1216 19:42:13.884205   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:42:13.890514   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:42:13.901980   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:42:13.923928   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:42:13.965394   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:42:14.046949   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:42:14.208480   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:42:14.530208   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:42:15.172260   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:42:16.453898   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:42:19.015407   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:42:24.137202   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:42:34.379227   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-761932 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-761932 --driver=kvm2  --container-runtime=crio: (42.226015433s)
--- PASS: TestErrorSpam/setup (42.23s)

                                                
                                    
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
TestErrorSpam/status (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
TestErrorSpam/pause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 pause
--- PASS: TestErrorSpam/pause (1.69s)

                                                
                                    
TestErrorSpam/unpause (1.85s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

                                                
                                    
TestErrorSpam/stop (5.38s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 stop: (2.370847262s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 stop: (1.928995815s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-761932 --log_dir /tmp/nospam-761932 stop: (1.080575104s)
--- PASS: TestErrorSpam/stop (5.38s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20091-7083/.minikube/files/etc/test/nested/copy/14254/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (58.12s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-782219 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1216 19:42:54.861383   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:43:35.823087   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-782219 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (58.115669722s)
--- PASS: TestFunctional/serial/StartWithProxy (58.12s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (38.24s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1216 19:43:44.985225   14254 config.go:182] Loaded profile config "functional-782219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-782219 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-782219 --alsologtostderr -v=8: (38.236337693s)
functional_test.go:663: soft start took 38.237069707s for "functional-782219" cluster.
I1216 19:44:23.221927   14254 config.go:182] Loaded profile config "functional-782219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/SoftStart (38.24s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-782219 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-782219 cache add registry.k8s.io/pause:3.1: (1.155894232s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-782219 cache add registry.k8s.io/pause:3.3: (1.226412689s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-782219 cache add registry.k8s.io/pause:latest: (1.151725585s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.53s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-782219 /tmp/TestFunctionalserialCacheCmdcacheadd_local1589485841/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 cache add minikube-local-cache-test:functional-782219
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-782219 cache add minikube-local-cache-test:functional-782219: (1.14010422s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 cache delete minikube-local-cache-test:functional-782219
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-782219
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.47s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782219 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (207.891224ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-782219 cache reload: (1.043542454s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)
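
The cache_reload steps above boil down to: delete the cached image inside the node, confirm `crictl inspecti` fails, run `minikube cache reload`, and confirm the image is back. A minimal sketch of that flow in Go, assuming a `minikube` binary on PATH and the `functional-782219` profile from this run (illustrative only, not the actual functional_test.go helper):

// Illustrative sketch only: drives the minikube CLI the same way the log above does.
package main

import (
	"fmt"
	"os/exec"
)

// run invokes `minikube <args...>` and echoes its combined output.
func run(args ...string) error {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	p := "functional-782219" // profile name from this run (assumption: it is still running)
	// Remove the image inside the node, as the test does via crictl over ssh.
	_ = run("-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	// inspecti is now expected to fail (exit status 1 in the log above).
	if run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}
	// Reload the cache from the host, after which inspecti should succeed again.
	_ = run("-p", p, "cache", "reload")
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}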

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 kubectl -- --context functional-782219 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-782219 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (371.93s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-782219 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1216 19:44:57.747478   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:47:13.885098   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:47:41.588997   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-782219 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (6m11.928294443s)
functional_test.go:761: restart took 6m11.928438933s for "functional-782219" cluster.
I1216 19:50:42.687582   14254 config.go:182] Loaded profile config "functional-782219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestFunctional/serial/ExtraConfig (371.93s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-782219 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-782219 logs: (1.473902474s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 logs --file /tmp/TestFunctionalserialLogsFileCmd2321709319/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-782219 logs --file /tmp/TestFunctionalserialLogsFileCmd2321709319/001/logs.txt: (1.519766022s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.38s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-782219 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-782219
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-782219: exit status 115 (276.885216ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.175:32708 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-782219 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.38s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782219 config get cpus: exit status 14 (54.137305ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782219 config get cpus: exit status 14 (53.1247ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
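
The ConfigCmd checks above rely on `minikube config get` exiting with status 14 when the key is unset (seen twice in the log). A small, hypothetical helper for reading that exit code, assuming `minikube` is on PATH; the value 14 is taken from this run, not from minikube's source:

// Illustrative sketch only: checks the exit status of `minikube config get`.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configGetExitCode runs `minikube -p <profile> config get <key>` and returns the exit code.
func configGetExitCode(profile, key string) (int, error) {
	cmd := exec.Command("minikube", "-p", profile, "config", "get", key)
	if err := cmd.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode(), nil
		}
		return 0, err // e.g. minikube binary not found
	}
	return 0, nil // key is set
}

func main() {
	code, err := configGetExitCode("functional-782219", "cpus")
	if err != nil {
		panic(err)
	}
	// In the run above an unset key produced exit status 14; 0 means the key is set.
	fmt.Println("config get cpus exit code:", code)
}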

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (20.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-782219 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-782219 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23635: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.28s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-782219 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-782219 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (151.592831ms)

                                                
                                                
-- stdout --
	* [functional-782219] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20091
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 19:51:07.407429   23268 out.go:345] Setting OutFile to fd 1 ...
	I1216 19:51:07.407543   23268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:51:07.407551   23268 out.go:358] Setting ErrFile to fd 2...
	I1216 19:51:07.407556   23268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:51:07.407726   23268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 19:51:07.408231   23268 out.go:352] Setting JSON to false
	I1216 19:51:07.409171   23268 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2012,"bootTime":1734376655,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 19:51:07.409285   23268 start.go:139] virtualization: kvm guest
	I1216 19:51:07.411452   23268 out.go:177] * [functional-782219] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 19:51:07.413023   23268 notify.go:220] Checking for updates...
	I1216 19:51:07.413075   23268 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 19:51:07.414564   23268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 19:51:07.416226   23268 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 19:51:07.417649   23268 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 19:51:07.419115   23268 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 19:51:07.420385   23268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 19:51:07.422036   23268 config.go:182] Loaded profile config "functional-782219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 19:51:07.422471   23268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:51:07.422539   23268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:51:07.438792   23268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42447
	I1216 19:51:07.439308   23268 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:51:07.439869   23268 main.go:141] libmachine: Using API Version  1
	I1216 19:51:07.439895   23268 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:51:07.440206   23268 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:51:07.440352   23268 main.go:141] libmachine: (functional-782219) Calling .DriverName
	I1216 19:51:07.440595   23268 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 19:51:07.440894   23268 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:51:07.440934   23268 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:51:07.456447   23268 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39795
	I1216 19:51:07.457010   23268 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:51:07.457517   23268 main.go:141] libmachine: Using API Version  1
	I1216 19:51:07.457550   23268 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:51:07.457850   23268 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:51:07.458034   23268 main.go:141] libmachine: (functional-782219) Calling .DriverName
	I1216 19:51:07.493925   23268 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 19:51:07.495436   23268 start.go:297] selected driver: kvm2
	I1216 19:51:07.495454   23268 start.go:901] validating driver "kvm2" against &{Name:functional-782219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-782219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 19:51:07.495557   23268 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 19:51:07.497911   23268 out.go:201] 
	W1216 19:51:07.499320   23268 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 19:51:07.500626   23268 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-782219 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-782219 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-782219 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (146.970354ms)

                                                
                                                
-- stdout --
	* [functional-782219] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20091
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 19:51:02.965474   22919 out.go:345] Setting OutFile to fd 1 ...
	I1216 19:51:02.965640   22919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:51:02.965652   22919 out.go:358] Setting ErrFile to fd 2...
	I1216 19:51:02.965658   22919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:51:02.966053   22919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 19:51:02.966776   22919 out.go:352] Setting JSON to false
	I1216 19:51:02.968060   22919 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2008,"bootTime":1734376655,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 19:51:02.968186   22919 start.go:139] virtualization: kvm guest
	I1216 19:51:02.970805   22919 out.go:177] * [functional-782219] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1216 19:51:02.972467   22919 notify.go:220] Checking for updates...
	I1216 19:51:02.972477   22919 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 19:51:02.974120   22919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 19:51:02.975541   22919 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 19:51:02.976914   22919 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 19:51:02.978349   22919 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 19:51:02.979849   22919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 19:51:02.981804   22919 config.go:182] Loaded profile config "functional-782219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 19:51:02.982381   22919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:51:02.982444   22919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:51:02.997810   22919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44345
	I1216 19:51:02.998305   22919 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:51:02.998966   22919 main.go:141] libmachine: Using API Version  1
	I1216 19:51:02.998992   22919 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:51:02.999395   22919 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:51:02.999629   22919 main.go:141] libmachine: (functional-782219) Calling .DriverName
	I1216 19:51:02.999944   22919 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 19:51:03.000422   22919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:51:03.000507   22919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:51:03.015540   22919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33121
	I1216 19:51:03.016015   22919 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:51:03.016522   22919 main.go:141] libmachine: Using API Version  1
	I1216 19:51:03.016545   22919 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:51:03.016854   22919 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:51:03.017051   22919 main.go:141] libmachine: (functional-782219) Calling .DriverName
	I1216 19:51:03.050213   22919 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1216 19:51:03.051902   22919 start.go:297] selected driver: kvm2
	I1216 19:51:03.051918   22919 start.go:901] validating driver "kvm2" against &{Name:functional-782219 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:functional-782219 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8441 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 19:51:03.052027   22919 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 19:51:03.054191   22919 out.go:201] 
	W1216 19:51:03.055491   22919 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 19:51:03.056819   22919 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.97s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (15.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-782219 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-782219 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-njmsh" [aa6565d8-74b0-4844-83d3-6bd35dab9b5d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-njmsh" [aa6565d8-74b0-4844-83d3-6bd35dab9b5d] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 15.005480959s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.175:32381
functional_test.go:1675: http://192.168.39.175:32381: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-njmsh

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.175:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.175:32381
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (15.47s)

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (45.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [317a5051-6710-484c-a000-ff0d67182525] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004441357s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-782219 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-782219 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-782219 get pvc myclaim -o=json
I1216 19:50:56.733063   14254 retry.go:31] will retry after 2.429041534s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:8f92934b-7f09-4ec5-89bb-7f030b3c350e ResourceVersion:478 Generation:0 CreationTimestamp:2024-12-16 19:50:56 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001a84120 VolumeMode:0xc001a84130 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-782219 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-782219 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c389f3ef-142b-4eba-966f-1840870e3ed9] Pending
helpers_test.go:344: "sp-pod" [c389f3ef-142b-4eba-966f-1840870e3ed9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c389f3ef-142b-4eba-966f-1840870e3ed9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.020762291s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-782219 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-782219 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-782219 delete -f testdata/storage-provisioner/pod.yaml: (1.932898721s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-782219 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cff31186-a192-49bc-a67a-46f2883dec98] Pending
helpers_test.go:344: "sp-pod" [cff31186-a192-49bc-a67a-46f2883dec98] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cff31186-a192-49bc-a67a-46f2883dec98] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.005480845s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-782219 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.82s)
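
The PersistentVolumeClaim flow above waits for the claim to move from Pending to Bound (the retry.go line), writes /tmp/mount/foo from the first sp-pod, recreates the pod, and checks the file survived. A minimal polling sketch for the Bound wait, assuming kubectl and the `functional-782219` context from this run; the interval and deadline are illustrative:

// Illustrative sketch only: polls a PVC until it reports phase Bound.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pvcPhase asks kubectl for the current phase of a claim ("Pending", "Bound", ...).
func pvcPhase(ctx, name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
		"-o", "jsonpath={.status.phase}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(3 * time.Minute) // illustrative deadline
	for time.Now().Before(deadline) {
		phase, err := pvcPhase("functional-782219", "myclaim")
		if err == nil && phase == "Bound" {
			fmt.Println("claim is Bound")
			return
		}
		fmt.Printf("phase=%q, retrying\n", phase)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for the claim to bind")
}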

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh -n functional-782219 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 cp functional-782219:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4251773428/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh -n functional-782219 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh -n functional-782219 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.85s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (29.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-782219 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-2w7pt" [46386766-e492-4774-ae72-f8f1858e21b0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-2w7pt" [46386766-e492-4774-ae72-f8f1858e21b0] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.011346709s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-782219 exec mysql-58ccfd96bb-2w7pt -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-782219 exec mysql-58ccfd96bb-2w7pt -- mysql -ppassword -e "show databases;": exit status 1 (335.39876ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 19:51:24.847017   14254 retry.go:31] will retry after 501.15736ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-782219 exec mysql-58ccfd96bb-2w7pt -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-782219 exec mysql-58ccfd96bb-2w7pt -- mysql -ppassword -e "show databases;": exit status 1 (184.377252ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 19:51:25.533781   14254 retry.go:31] will retry after 1.290208332s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-782219 exec mysql-58ccfd96bb-2w7pt -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-782219 exec mysql-58ccfd96bb-2w7pt -- mysql -ppassword -e "show databases;": exit status 1 (270.815986ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 19:51:27.095694   14254 retry.go:31] will retry after 1.824646648s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-782219 exec mysql-58ccfd96bb-2w7pt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.35s)
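
The MySQL query above only succeeds after a few retries while mysqld finishes initializing (the retry.go lines with growing backoff). A minimal retry-with-backoff sketch in the same spirit, shelling out to kubectl as the test does; the pod name, context, and starting backoff are taken from this run, while the loop itself is illustrative:

// Illustrative sketch only: retries the mysql query with growing backoff, as the log shows.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// queryMySQL runs `show databases;` inside the mysql pod and reports whether it succeeded.
func queryMySQL(ctx, pod string) error {
	out, err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--",
		"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	ctx, pod := "functional-782219", "mysql-58ccfd96bb-2w7pt" // names from this run
	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		if queryMySQL(ctx, pod) == nil {
			fmt.Println("mysql is up")
			return
		}
		// Access-denied and socket errors right after startup are expected; wait and retry.
		fmt.Printf("attempt %d failed, retrying after %v\n", attempt, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("mysql never became ready")
}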

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/14254/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "sudo cat /etc/test/nested/copy/14254/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/14254.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "sudo cat /etc/ssl/certs/14254.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/14254.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "sudo cat /usr/share/ca-certificates/14254.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/142542.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "sudo cat /etc/ssl/certs/142542.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/142542.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "sudo cat /usr/share/ca-certificates/142542.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.25s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-782219 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782219 ssh "sudo systemctl is-active docker": exit status 1 (227.655464ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782219 ssh "sudo systemctl is-active containerd": exit status 1 (231.372984ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/License (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-782219 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.0
registry.k8s.io/kube-proxy:v1.32.0
registry.k8s.io/kube-controller-manager:v1.32.0
registry.k8s.io/kube-apiserver:v1.32.0
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-782219
localhost/kicbase/echo-server:functional-782219
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-782219 image ls --format short --alsologtostderr:
I1216 19:51:27.940658   24235 out.go:345] Setting OutFile to fd 1 ...
I1216 19:51:27.940786   24235 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:51:27.940795   24235 out.go:358] Setting ErrFile to fd 2...
I1216 19:51:27.940800   24235 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:51:27.940989   24235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
I1216 19:51:27.941603   24235 config.go:182] Loaded profile config "functional-782219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I1216 19:51:27.941706   24235 config.go:182] Loaded profile config "functional-782219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I1216 19:51:27.942069   24235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:51:27.942113   24235 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:51:27.962357   24235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46433
I1216 19:51:27.962897   24235 main.go:141] libmachine: () Calling .GetVersion
I1216 19:51:27.963531   24235 main.go:141] libmachine: Using API Version  1
I1216 19:51:27.963557   24235 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:51:27.964013   24235 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:51:27.964200   24235 main.go:141] libmachine: (functional-782219) Calling .GetState
I1216 19:51:27.966352   24235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:51:27.966396   24235 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:51:27.982028   24235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40487
I1216 19:51:27.982539   24235 main.go:141] libmachine: () Calling .GetVersion
I1216 19:51:27.983056   24235 main.go:141] libmachine: Using API Version  1
I1216 19:51:27.983088   24235 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:51:27.983506   24235 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:51:27.983737   24235 main.go:141] libmachine: (functional-782219) Calling .DriverName
I1216 19:51:27.983930   24235 ssh_runner.go:195] Run: systemctl --version
I1216 19:51:27.983972   24235 main.go:141] libmachine: (functional-782219) Calling .GetSSHHostname
I1216 19:51:27.987069   24235 main.go:141] libmachine: (functional-782219) DBG | domain functional-782219 has defined MAC address 52:54:00:c1:1e:fc in network mk-functional-782219
I1216 19:51:27.987593   24235 main.go:141] libmachine: (functional-782219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:1e:fc", ip: ""} in network mk-functional-782219: {Iface:virbr1 ExpiryTime:2024-12-16 20:43:02 +0000 UTC Type:0 Mac:52:54:00:c1:1e:fc Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-782219 Clientid:01:52:54:00:c1:1e:fc}
I1216 19:51:27.987632   24235 main.go:141] libmachine: (functional-782219) DBG | domain functional-782219 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:1e:fc in network mk-functional-782219
I1216 19:51:27.987790   24235 main.go:141] libmachine: (functional-782219) Calling .GetSSHPort
I1216 19:51:27.987986   24235 main.go:141] libmachine: (functional-782219) Calling .GetSSHKeyPath
I1216 19:51:27.988158   24235 main.go:141] libmachine: (functional-782219) Calling .GetSSHUsername
I1216 19:51:27.988326   24235 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/functional-782219/id_rsa Username:docker}
I1216 19:51:28.082251   24235 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 19:51:28.139282   24235 main.go:141] libmachine: Making call to close driver server
I1216 19:51:28.139297   24235 main.go:141] libmachine: (functional-782219) Calling .Close
I1216 19:51:28.139570   24235 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:51:28.139589   24235 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:51:28.139607   24235 main.go:141] libmachine: Making call to close driver server
I1216 19:51:28.139617   24235 main.go:141] libmachine: (functional-782219) Calling .Close
I1216 19:51:28.139818   24235 main.go:141] libmachine: (functional-782219) DBG | Closing plugin on server side
I1216 19:51:28.139879   24235 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:51:28.139913   24235 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
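The stderr above shows how the short listing is produced: minikube opens an SSH session to the node, runs "sudo crictl images --output json", and prints one repo tag per line. A rough sketch of that last step; the JSON field names ("images", "id", "repoTags", "repoDigests", "size") are assumptions based on crictl's usual output, not taken from this report, and crictl is assumed to be available where the sketch runs.

// crictl_short_list.go: hedged sketch of turning crictl's JSON image list into the
// one-tag-per-line output shown above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
}

type criImageList struct {
	Images []criImage `json:"images"`
}

func main() {
	// On the node this is run as "sudo crictl images --output json" over SSH.
	out, err := exec.Command("crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list criImageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}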

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-782219 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 66f8bdd3810c9 | 196MB  |
| localhost/kicbase/echo-server           | functional-782219  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/kube-apiserver          | v1.32.0            | c2e17b8d0f4a3 | 98.1MB |
| registry.k8s.io/kube-proxy              | v1.32.0            | 040f9f8aac8cd | 95.3MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-controller-manager | v1.32.0            | 8cab3d2a8bd0f | 90.8MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-782219  | fa132aa0ba768 | 3.33kB |
| registry.k8s.io/kube-scheduler          | v1.32.0            | a389e107f4ff1 | 70.6MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-782219 image ls --format table --alsologtostderr:
I1216 19:51:29.960340   24391 out.go:345] Setting OutFile to fd 1 ...
I1216 19:51:29.960442   24391 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:51:29.960450   24391 out.go:358] Setting ErrFile to fd 2...
I1216 19:51:29.960454   24391 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:51:29.960659   24391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
I1216 19:51:29.961314   24391 config.go:182] Loaded profile config "functional-782219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I1216 19:51:29.961415   24391 config.go:182] Loaded profile config "functional-782219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I1216 19:51:29.961765   24391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:51:29.961804   24391 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:51:29.977116   24391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41333
I1216 19:51:29.977657   24391 main.go:141] libmachine: () Calling .GetVersion
I1216 19:51:29.978214   24391 main.go:141] libmachine: Using API Version  1
I1216 19:51:29.978237   24391 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:51:29.978769   24391 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:51:29.978994   24391 main.go:141] libmachine: (functional-782219) Calling .GetState
I1216 19:51:29.981118   24391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:51:29.981174   24391 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:51:29.997055   24391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42747
I1216 19:51:29.997566   24391 main.go:141] libmachine: () Calling .GetVersion
I1216 19:51:29.998066   24391 main.go:141] libmachine: Using API Version  1
I1216 19:51:29.998092   24391 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:51:29.998395   24391 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:51:29.998596   24391 main.go:141] libmachine: (functional-782219) Calling .DriverName
I1216 19:51:29.998797   24391 ssh_runner.go:195] Run: systemctl --version
I1216 19:51:29.998824   24391 main.go:141] libmachine: (functional-782219) Calling .GetSSHHostname
I1216 19:51:30.002231   24391 main.go:141] libmachine: (functional-782219) DBG | domain functional-782219 has defined MAC address 52:54:00:c1:1e:fc in network mk-functional-782219
I1216 19:51:30.002795   24391 main.go:141] libmachine: (functional-782219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:1e:fc", ip: ""} in network mk-functional-782219: {Iface:virbr1 ExpiryTime:2024-12-16 20:43:02 +0000 UTC Type:0 Mac:52:54:00:c1:1e:fc Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-782219 Clientid:01:52:54:00:c1:1e:fc}
I1216 19:51:30.002833   24391 main.go:141] libmachine: (functional-782219) DBG | domain functional-782219 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:1e:fc in network mk-functional-782219
I1216 19:51:30.003007   24391 main.go:141] libmachine: (functional-782219) Calling .GetSSHPort
I1216 19:51:30.003211   24391 main.go:141] libmachine: (functional-782219) Calling .GetSSHKeyPath
I1216 19:51:30.003383   24391 main.go:141] libmachine: (functional-782219) Calling .GetSSHUsername
I1216 19:51:30.003517   24391 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/functional-782219/id_rsa Username:docker}
I1216 19:51:30.090136   24391 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 19:51:30.136382   24391 main.go:141] libmachine: Making call to close driver server
I1216 19:51:30.136401   24391 main.go:141] libmachine: (functional-782219) Calling .Close
I1216 19:51:30.136747   24391 main.go:141] libmachine: (functional-782219) DBG | Closing plugin on server side
I1216 19:51:30.136776   24391 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:51:30.136793   24391 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:51:30.136811   24391 main.go:141] libmachine: Making call to close driver server
I1216 19:51:30.136825   24391 main.go:141] libmachine: (functional-782219) Calling .Close
I1216 19:51:30.137086   24391 main.go:141] libmachine: (functional-782219) DBG | Closing plugin on server side
I1216 19:51:30.137131   24391 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:51:30.137150   24391 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image ls --format json --alsologtostderr
2024/12/16 19:51:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-782219 image ls --format json --alsologtostderr:
[{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"fa132aa0ba76891a3515053c8602aa1ceb53e06231035b8dde7aa6956c9d70d3","repoDigests":["localhost/minikube-local-cache-test@sha256:85f76684041a931b4622ec8a196423bde8f6e352e32a5a39a9d130b50dc61d07"],"repoTags":["localhost/minikube-local-cache-test:functional-782219"],"size":"3328"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187
d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":["docker.io/library/nginx@sha256:3d696e83
57051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"195919252"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8
s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08","repoDigests":["registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4","registry.k8s.io/kube-proxy@sha256:8db2ca0e784c2188157f005aac67afbbb70d3d68747eea23765bef83917a5a31"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.0"],"size":"95270297"},{"id":"a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1ce9d9222572dc72760ba18589a048b3cf32163dac0708522f3b991974fafdec","registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.0"],"size":"70649156"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247
077"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-782219
"],"size":"4943877"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b","registry.k8s.io/kube-apiserver@sha256:fe1eb8fc870b01f4b1f470d2b179a1d1a86d6e2fa174bd10c01b
f45bc5b03200"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.0"],"size":"98051552"},{"id":"8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:0feb9730f9de32b0b1c5cc0eb756c1f4abf2246f1ac8d3fe75285bfee282d0ac","registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.0"],"size":"90789190"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-782219 image ls --format json --alsologtostderr:
I1216 19:51:29.707190   24349 out.go:345] Setting OutFile to fd 1 ...
I1216 19:51:29.707337   24349 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:51:29.707353   24349 out.go:358] Setting ErrFile to fd 2...
I1216 19:51:29.707360   24349 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:51:29.707681   24349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
I1216 19:51:29.708368   24349 config.go:182] Loaded profile config "functional-782219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I1216 19:51:29.708502   24349 config.go:182] Loaded profile config "functional-782219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I1216 19:51:29.708852   24349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:51:29.708889   24349 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:51:29.725318   24349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42969
I1216 19:51:29.725861   24349 main.go:141] libmachine: () Calling .GetVersion
I1216 19:51:29.726441   24349 main.go:141] libmachine: Using API Version  1
I1216 19:51:29.726458   24349 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:51:29.726817   24349 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:51:29.727064   24349 main.go:141] libmachine: (functional-782219) Calling .GetState
I1216 19:51:29.728945   24349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:51:29.728987   24349 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:51:29.747592   24349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35945
I1216 19:51:29.748080   24349 main.go:141] libmachine: () Calling .GetVersion
I1216 19:51:29.748656   24349 main.go:141] libmachine: Using API Version  1
I1216 19:51:29.748689   24349 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:51:29.749054   24349 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:51:29.749233   24349 main.go:141] libmachine: (functional-782219) Calling .DriverName
I1216 19:51:29.749465   24349 ssh_runner.go:195] Run: systemctl --version
I1216 19:51:29.749489   24349 main.go:141] libmachine: (functional-782219) Calling .GetSSHHostname
I1216 19:51:29.752982   24349 main.go:141] libmachine: (functional-782219) DBG | domain functional-782219 has defined MAC address 52:54:00:c1:1e:fc in network mk-functional-782219
I1216 19:51:29.753458   24349 main.go:141] libmachine: (functional-782219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:1e:fc", ip: ""} in network mk-functional-782219: {Iface:virbr1 ExpiryTime:2024-12-16 20:43:02 +0000 UTC Type:0 Mac:52:54:00:c1:1e:fc Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-782219 Clientid:01:52:54:00:c1:1e:fc}
I1216 19:51:29.753518   24349 main.go:141] libmachine: (functional-782219) DBG | domain functional-782219 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:1e:fc in network mk-functional-782219
I1216 19:51:29.753663   24349 main.go:141] libmachine: (functional-782219) Calling .GetSSHPort
I1216 19:51:29.753886   24349 main.go:141] libmachine: (functional-782219) Calling .GetSSHKeyPath
I1216 19:51:29.754090   24349 main.go:141] libmachine: (functional-782219) Calling .GetSSHUsername
I1216 19:51:29.754251   24349 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/functional-782219/id_rsa Username:docker}
I1216 19:51:29.857666   24349 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 19:51:29.906802   24349 main.go:141] libmachine: Making call to close driver server
I1216 19:51:29.906820   24349 main.go:141] libmachine: (functional-782219) Calling .Close
I1216 19:51:29.907143   24349 main.go:141] libmachine: (functional-782219) DBG | Closing plugin on server side
I1216 19:51:29.907209   24349 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:51:29.907222   24349 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:51:29.907237   24349 main.go:141] libmachine: Making call to close driver server
I1216 19:51:29.907260   24349 main.go:141] libmachine: (functional-782219) Calling .Close
I1216 19:51:29.907533   24349 main.go:141] libmachine: (functional-782219) DBG | Closing plugin on server side
I1216 19:51:29.907570   24349 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:51:29.907588   24349 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
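The stdout above is a flat JSON array with id, repoDigests, repoTags and size per image, so it is easy to consume from a script. A small sketch that shells out to the same command and decodes it; the struct mirrors the fields visible in the output and is only an illustration, not minikube's own type.

// parse_image_ls_json.go: hedged sketch that consumes "image ls --format json" output
// like the blob shown above, assuming the same binary path and profile name.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-782219",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		// Print a truncated ID, the tags, and the raw byte count from the listing.
		fmt.Printf("%-15s %v %s bytes\n", img.ID[:13], img.RepoTags, img.Size)
	}
}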

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-782219 image ls --format yaml --alsologtostderr:
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-782219
size: "4943877"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: fa132aa0ba76891a3515053c8602aa1ceb53e06231035b8dde7aa6956c9d70d3
repoDigests:
- localhost/minikube-local-cache-test@sha256:85f76684041a931b4622ec8a196423bde8f6e352e32a5a39a9d130b50dc61d07
repoTags:
- localhost/minikube-local-cache-test:functional-782219
size: "3328"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1ce9d9222572dc72760ba18589a048b3cf32163dac0708522f3b991974fafdec
- registry.k8s.io/kube-scheduler@sha256:84c998f7610b356a5eed24f801c01b273cf3e83f081f25c9b16aa8136c2cafb1
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.0
size: "70649156"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0feb9730f9de32b0b1c5cc0eb756c1f4abf2246f1ac8d3fe75285bfee282d0ac
- registry.k8s.io/kube-controller-manager@sha256:c8faedf1a5f3981ffade770c696b676d30613681a95be3287c1f7eec50e49b6d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.0
size: "90789190"
- id: 040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6aee00d0c7f4869144d1bdbbed7572cd55fd1a4d58fef5a21f53836054cb39b4
- registry.k8s.io/kube-proxy@sha256:8db2ca0e784c2188157f005aac67afbbb70d3d68747eea23765bef83917a5a31
repoTags:
- registry.k8s.io/kube-proxy:v1.32.0
size: "95270297"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests:
- docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "195919252"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ebc0ce2d7e647dd97980ec338ad81496c111741ab4ad05e7c5d37539aaf7dc3b
- registry.k8s.io/kube-apiserver@sha256:fe1eb8fc870b01f4b1f470d2b179a1d1a86d6e2fa174bd10c01bf45bc5b03200
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.0
size: "98051552"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-782219 image ls --format yaml --alsologtostderr:
I1216 19:51:28.189267   24259 out.go:345] Setting OutFile to fd 1 ...
I1216 19:51:28.189376   24259 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:51:28.189386   24259 out.go:358] Setting ErrFile to fd 2...
I1216 19:51:28.189391   24259 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:51:28.189576   24259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
I1216 19:51:28.190183   24259 config.go:182] Loaded profile config "functional-782219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I1216 19:51:28.190281   24259 config.go:182] Loaded profile config "functional-782219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I1216 19:51:28.190611   24259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:51:28.190657   24259 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:51:28.207929   24259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34229
I1216 19:51:28.208578   24259 main.go:141] libmachine: () Calling .GetVersion
I1216 19:51:28.209250   24259 main.go:141] libmachine: Using API Version  1
I1216 19:51:28.209276   24259 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:51:28.209603   24259 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:51:28.209793   24259 main.go:141] libmachine: (functional-782219) Calling .GetState
I1216 19:51:28.211617   24259 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:51:28.211664   24259 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:51:28.226904   24259 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39191
I1216 19:51:28.227427   24259 main.go:141] libmachine: () Calling .GetVersion
I1216 19:51:28.228035   24259 main.go:141] libmachine: Using API Version  1
I1216 19:51:28.228069   24259 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:51:28.228483   24259 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:51:28.228663   24259 main.go:141] libmachine: (functional-782219) Calling .DriverName
I1216 19:51:28.228850   24259 ssh_runner.go:195] Run: systemctl --version
I1216 19:51:28.228878   24259 main.go:141] libmachine: (functional-782219) Calling .GetSSHHostname
I1216 19:51:28.232177   24259 main.go:141] libmachine: (functional-782219) DBG | domain functional-782219 has defined MAC address 52:54:00:c1:1e:fc in network mk-functional-782219
I1216 19:51:28.232634   24259 main.go:141] libmachine: (functional-782219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:1e:fc", ip: ""} in network mk-functional-782219: {Iface:virbr1 ExpiryTime:2024-12-16 20:43:02 +0000 UTC Type:0 Mac:52:54:00:c1:1e:fc Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-782219 Clientid:01:52:54:00:c1:1e:fc}
I1216 19:51:28.232667   24259 main.go:141] libmachine: (functional-782219) DBG | domain functional-782219 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:1e:fc in network mk-functional-782219
I1216 19:51:28.232799   24259 main.go:141] libmachine: (functional-782219) Calling .GetSSHPort
I1216 19:51:28.233001   24259 main.go:141] libmachine: (functional-782219) Calling .GetSSHKeyPath
I1216 19:51:28.233177   24259 main.go:141] libmachine: (functional-782219) Calling .GetSSHUsername
I1216 19:51:28.233394   24259 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/functional-782219/id_rsa Username:docker}
I1216 19:51:28.322704   24259 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 19:51:28.395688   24259 main.go:141] libmachine: Making call to close driver server
I1216 19:51:28.395702   24259 main.go:141] libmachine: (functional-782219) Calling .Close
I1216 19:51:28.395986   24259 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:51:28.396006   24259 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:51:28.396032   24259 main.go:141] libmachine: Making call to close driver server
I1216 19:51:28.396043   24259 main.go:141] libmachine: (functional-782219) Calling .Close
I1216 19:51:28.396007   24259 main.go:141] libmachine: (functional-782219) DBG | Closing plugin on server side
I1216 19:51:28.396266   24259 main.go:141] libmachine: (functional-782219) DBG | Closing plugin on server side
I1216 19:51:28.396277   24259 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:51:28.396289   24259 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782219 ssh pgrep buildkitd: exit status 1 (220.721719ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image build -t localhost/my-image:functional-782219 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-782219 image build -t localhost/my-image:functional-782219 testdata/build --alsologtostderr: (2.784440774s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-782219 image build -t localhost/my-image:functional-782219 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1c6ae82aa4b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-782219
--> f2f40991e9f
Successfully tagged localhost/my-image:functional-782219
f2f40991e9ffc08b9aef13f0dcce7325f853eb42d0042d7c835fae4844ca8f41
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-782219 image build -t localhost/my-image:functional-782219 testdata/build --alsologtostderr:
I1216 19:51:28.676302   24314 out.go:345] Setting OutFile to fd 1 ...
I1216 19:51:28.676491   24314 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:51:28.676502   24314 out.go:358] Setting ErrFile to fd 2...
I1216 19:51:28.676510   24314 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:51:28.676791   24314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
I1216 19:51:28.677649   24314 config.go:182] Loaded profile config "functional-782219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I1216 19:51:28.678343   24314 config.go:182] Loaded profile config "functional-782219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I1216 19:51:28.678756   24314 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:51:28.678808   24314 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:51:28.693944   24314 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33233
I1216 19:51:28.694392   24314 main.go:141] libmachine: () Calling .GetVersion
I1216 19:51:28.694981   24314 main.go:141] libmachine: Using API Version  1
I1216 19:51:28.695010   24314 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:51:28.695409   24314 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:51:28.695600   24314 main.go:141] libmachine: (functional-782219) Calling .GetState
I1216 19:51:28.697407   24314 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:51:28.697445   24314 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:51:28.712271   24314 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34743
I1216 19:51:28.712742   24314 main.go:141] libmachine: () Calling .GetVersion
I1216 19:51:28.713313   24314 main.go:141] libmachine: Using API Version  1
I1216 19:51:28.713338   24314 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:51:28.713630   24314 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:51:28.713825   24314 main.go:141] libmachine: (functional-782219) Calling .DriverName
I1216 19:51:28.713993   24314 ssh_runner.go:195] Run: systemctl --version
I1216 19:51:28.714017   24314 main.go:141] libmachine: (functional-782219) Calling .GetSSHHostname
I1216 19:51:28.717313   24314 main.go:141] libmachine: (functional-782219) DBG | domain functional-782219 has defined MAC address 52:54:00:c1:1e:fc in network mk-functional-782219
I1216 19:51:28.717728   24314 main.go:141] libmachine: (functional-782219) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c1:1e:fc", ip: ""} in network mk-functional-782219: {Iface:virbr1 ExpiryTime:2024-12-16 20:43:02 +0000 UTC Type:0 Mac:52:54:00:c1:1e:fc Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-782219 Clientid:01:52:54:00:c1:1e:fc}
I1216 19:51:28.717765   24314 main.go:141] libmachine: (functional-782219) DBG | domain functional-782219 has defined IP address 192.168.39.175 and MAC address 52:54:00:c1:1e:fc in network mk-functional-782219
I1216 19:51:28.717908   24314 main.go:141] libmachine: (functional-782219) Calling .GetSSHPort
I1216 19:51:28.718083   24314 main.go:141] libmachine: (functional-782219) Calling .GetSSHKeyPath
I1216 19:51:28.718264   24314 main.go:141] libmachine: (functional-782219) Calling .GetSSHUsername
I1216 19:51:28.718431   24314 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/functional-782219/id_rsa Username:docker}
I1216 19:51:28.826171   24314 build_images.go:161] Building image from path: /tmp/build.1860843143.tar
I1216 19:51:28.826252   24314 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 19:51:28.850886   24314 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1860843143.tar
I1216 19:51:28.864149   24314 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1860843143.tar: stat -c "%s %y" /var/lib/minikube/build/build.1860843143.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1860843143.tar': No such file or directory
I1216 19:51:28.864191   24314 ssh_runner.go:362] scp /tmp/build.1860843143.tar --> /var/lib/minikube/build/build.1860843143.tar (3072 bytes)
I1216 19:51:29.035006   24314 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1860843143
I1216 19:51:29.048105   24314 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1860843143 -xf /var/lib/minikube/build/build.1860843143.tar
I1216 19:51:29.061675   24314 crio.go:315] Building image: /var/lib/minikube/build/build.1860843143
I1216 19:51:29.061741   24314 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-782219 /var/lib/minikube/build/build.1860843143 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1216 19:51:31.374001   24314 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-782219 /var/lib/minikube/build/build.1860843143 --cgroup-manager=cgroupfs: (2.312231335s)
I1216 19:51:31.374128   24314 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1860843143
I1216 19:51:31.386168   24314 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1860843143.tar
I1216 19:51:31.401970   24314 build_images.go:217] Built localhost/my-image:functional-782219 from /tmp/build.1860843143.tar
I1216 19:51:31.402009   24314 build_images.go:133] succeeded building to: functional-782219
I1216 19:51:31.402013   24314 build_images.go:134] failed building to: 
I1216 19:51:31.402066   24314 main.go:141] libmachine: Making call to close driver server
I1216 19:51:31.402075   24314 main.go:141] libmachine: (functional-782219) Calling .Close
I1216 19:51:31.402362   24314 main.go:141] libmachine: (functional-782219) DBG | Closing plugin on server side
I1216 19:51:31.402375   24314 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:51:31.402385   24314 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:51:31.402395   24314 main.go:141] libmachine: Making call to close driver server
I1216 19:51:31.402400   24314 main.go:141] libmachine: (functional-782219) Calling .Close
I1216 19:51:31.402620   24314 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:51:31.402637   24314 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.23s)
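The stderr above spells out the build path: the local build context is packed into a tar, copied into the VM under /var/lib/minikube/build, unpacked, and built with "sudo podman build ... --cgroup-manager=cgroupfs" before the scratch files are removed. A condensed sketch of the in-VM steps driven through "minikube ssh"; the build.ctx name is hypothetical (the log uses a random temp name such as build.1860843143), and the initial copy of the tar into the VM is assumed to have happened already.

// image_build_flow.go: hedged sketch of the sequence in the stderr above, not the code
// in build_images.go. Assumes the packed context is already at
// /var/lib/minikube/build/build.ctx.tar ("build.ctx" is a made-up name).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-782219"
	steps := []string{
		"sudo mkdir -p /var/lib/minikube/build/build.ctx",
		"sudo tar -C /var/lib/minikube/build/build.ctx -xf /var/lib/minikube/build/build.ctx.tar",
		"sudo podman build -t localhost/my-image:" + profile +
			" /var/lib/minikube/build/build.ctx --cgroup-manager=cgroupfs",
		"sudo rm -rf /var/lib/minikube/build/build.ctx /var/lib/minikube/build/build.ctx.tar",
	}
	for _, step := range steps {
		fmt.Println("==>", step)
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", step).CombinedOutput()
		if err != nil {
			fmt.Printf("step failed: %v\n%s", err, out)
			return
		}
	}
}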

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-782219
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "318.366493ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "48.584055ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "319.647948ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "55.732868ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image load --daemon kicbase/echo-server:functional-782219 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-782219 image load --daemon kicbase/echo-server:functional-782219 --alsologtostderr: (1.804701128s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.78s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-782219 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-782219 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-9jmbp" [27aa10e6-f2b7-48e8-b74f-8005a4bae460] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-9jmbp" [27aa10e6-f2b7-48e8-b74f-8005a4bae460] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.061726876s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.28s)
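The DeployApp steps above are a plain create/expose/wait sequence. A sketch of the same flow driven through kubectl, with a simple polling loop standing in for the test's readiness helper; the 10-minute budget mirrors the test's timeout and error handling is deliberately minimal.

// wait_hello_node.go: hedged sketch of deploy + expose + wait for the hello-node pod,
// using the same kubectl commands shown in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-782219"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	kubectl("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8")
	kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")

	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		// With a single replica, the jsonpath output is just the pod phase.
		out, err := kubectl("get", "pods", "-l", "app=hello-node",
			"-o", "jsonpath={.items[*].status.phase}")
		if err == nil && string(out) == "Running" {
			fmt.Println("hello-node is running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for hello-node")
}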

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image load --daemon kicbase/echo-server:functional-782219 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-782219 image load --daemon kicbase/echo-server:functional-782219 --alsologtostderr: (1.115137103s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-782219
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image load --daemon kicbase/echo-server:functional-782219 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image save kicbase/echo-server:functional-782219 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image rm kicbase/echo-server:functional-782219 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.88s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-782219
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 image save --daemon kicbase/echo-server:functional-782219 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-782219
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)
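Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a save / remove / load round trip for a tagged image. A sketch of that round trip against this profile; the /tmp tar path is hypothetical (the job writes the tar into its workspace), and any failing step simply aborts.

// image_roundtrip.go: hedged sketch tying together the image save/rm/load commands
// used by the tests above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func minikube(args ...string) {
	full := append([]string{"-p", "functional-782219"}, args...)
	cmd := exec.Command("out/minikube-linux-amd64", full...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("step failed:", err)
		os.Exit(1)
	}
}

func main() {
	tag, tarball := "kicbase/echo-server:functional-782219", "/tmp/echo-server-save.tar"
	minikube("image", "save", tag, tarball) // export the tagged image to a tarball
	minikube("image", "rm", tag)            // remove it from the cluster's runtime
	minikube("image", "load", tarball)      // re-import it from the tarball
	minikube("image", "ls")                 // confirm the tag is listed again
}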

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
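Note: all three UpdateContextCmd subtests run the same command, which rewrites the profile's kubeconfig entry so it points at the cluster's current IP and port; a quick manual check (the current-context line is an assumption, not part of the test):
    out/minikube-linux-amd64 -p functional-782219 update-context --alsologtostderr -v=2
    kubectl config current-context   # expected to show the functional-782219 context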

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 service list -o json
functional_test.go:1494: Took "523.710925ms" to run "out/minikube-linux-amd64 -p functional-782219 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.175:32283
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.175:32283
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)
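Note: the ServiceCmd subtests above show the different ways minikube can report a NodePort service's endpoint. A condensed reproduction against the hello-node service from this run (the final curl is an illustrative check, not part of the test):
    out/minikube-linux-amd64 -p functional-782219 service list
    out/minikube-linux-amd64 -p functional-782219 service list -o json
    out/minikube-linux-amd64 -p functional-782219 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-782219 service hello-node --url --format={{.IP}}
    URL=$(out/minikube-linux-amd64 -p functional-782219 service hello-node --url)
    curl -s "$URL"   # hits the endpoint printed above, e.g. http://192.168.39.175:32283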

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (20.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-782219 /tmp/TestFunctionalparallelMountCmdany-port1226815840/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1734378663062448427" to /tmp/TestFunctionalparallelMountCmdany-port1226815840/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1734378663062448427" to /tmp/TestFunctionalparallelMountCmdany-port1226815840/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1734378663062448427" to /tmp/TestFunctionalparallelMountCmdany-port1226815840/001/test-1734378663062448427
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782219 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (253.216214ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 19:51:03.316036   14254 retry.go:31] will retry after 577.166601ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 19:51 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 19:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 19:51 test-1734378663062448427
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh cat /mount-9p/test-1734378663062448427
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-782219 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8bb3a251-0b54-4ba8-8b4a-f41e04fba430] Pending
helpers_test.go:344: "busybox-mount" [8bb3a251-0b54-4ba8-8b4a-f41e04fba430] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8bb3a251-0b54-4ba8-8b4a-f41e04fba430] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8bb3a251-0b54-4ba8-8b4a-f41e04fba430] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 18.003363362s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-782219 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-782219 /tmp/TestFunctionalparallelMountCmdany-port1226815840/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (20.77s)
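Note: the any-port flow above amounts to starting a background 9p mount, confirming it from inside the guest, exercising it from a pod, and unmounting. A hand-run sketch, assuming the functional-782219 profile and an arbitrary host directory:
    out/minikube-linux-amd64 mount -p functional-782219 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
    # the first findmnt can race the mount coming up; the test simply retries on exit status 1
    out/minikube-linux-amd64 -p functional-782219 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-782219 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-782219 ssh "sudo umount -f /mount-9p"
    kill %1   # stop the background mount process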

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-782219 /tmp/TestFunctionalparallelMountCmdspecific-port1168793232/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782219 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (220.782514ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 19:51:24.056750   14254 retry.go:31] will retry after 315.208608ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-782219 /tmp/TestFunctionalparallelMountCmdspecific-port1168793232/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782219 ssh "sudo umount -f /mount-9p": exit status 1 (305.766671ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-782219 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-782219 /tmp/TestFunctionalparallelMountCmdspecific-port1168793232/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.82s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-782219 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4285235501/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-782219 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4285235501/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-782219 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4285235501/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-782219 ssh "findmnt -T" /mount1: exit status 1 (278.333399ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 19:51:25.931676   14254 retry.go:31] will retry after 482.410855ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-782219 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-782219 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-782219 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4285235501/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-782219 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4285235501/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-782219 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4285235501/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)
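Note: VerifyCleanup leans on the --kill flag, which tears down every mount process for the profile in one step instead of stopping the three mounts individually:
    out/minikube-linux-amd64 mount -p functional-782219 --kill=true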

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-782219
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-782219
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-782219
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (207.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-057998 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1216 19:52:13.884776   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-057998 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m27.295772965s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (207.99s)
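Note: StartCluster brings up the multi-control-plane profile with the exact flags recorded above; the follow-up status call is the same one the later serial subtests keep reusing:
    out/minikube-linux-amd64 start -p ha-057998 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-057998 status -v=7 --alsologtostderr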

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-057998 -- rollout status deployment/busybox: (3.188415677s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- exec busybox-58667487b6-tdh5m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- exec busybox-58667487b6-tqnn8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- exec busybox-58667487b6-zgpnt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- exec busybox-58667487b6-tdh5m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- exec busybox-58667487b6-tqnn8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- exec busybox-58667487b6-zgpnt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- exec busybox-58667487b6-tdh5m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- exec busybox-58667487b6-tqnn8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- exec busybox-58667487b6-zgpnt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.42s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- exec busybox-58667487b6-tdh5m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- exec busybox-58667487b6-tdh5m -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- exec busybox-58667487b6-tqnn8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- exec busybox-58667487b6-tqnn8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- exec busybox-58667487b6-zgpnt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-057998 -- exec busybox-58667487b6-zgpnt -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.22s)
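Note: DeployApp and PingHostFromPods validate in-cluster DNS and pod-to-host reachability with kubectl exec probes against the busybox deployment. The log drives kubectl through the minikube wrapper; the equivalent direct form (the pod name is a placeholder) is:
    kubectl --context ha-057998 get pods -o jsonpath='{.items[*].metadata.name}'
    kubectl --context ha-057998 exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local
    kubectl --context ha-057998 exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context ha-057998 exec <busybox-pod> -- sh -c "ping -c 1 192.168.39.1"   # 192.168.39.1 is the host gateway in this run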

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-057998 -v=7 --alsologtostderr
E1216 19:55:50.481552   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:55:50.488097   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:55:50.499636   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:55:50.521095   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:55:50.562800   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:55:50.644253   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:55:50.805826   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:55:51.128014   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:55:51.770219   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:55:53.052371   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:55:55.614198   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:56:00.735698   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-057998 -v=7 --alsologtostderr: (55.791373272s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.66s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-057998 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp testdata/cp-test.txt ha-057998:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp ha-057998:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile817183063/001/cp-test_ha-057998.txt
E1216 19:56:10.977883   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp ha-057998:/home/docker/cp-test.txt ha-057998-m02:/home/docker/cp-test_ha-057998_ha-057998-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m02 "sudo cat /home/docker/cp-test_ha-057998_ha-057998-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp ha-057998:/home/docker/cp-test.txt ha-057998-m03:/home/docker/cp-test_ha-057998_ha-057998-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m03 "sudo cat /home/docker/cp-test_ha-057998_ha-057998-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp ha-057998:/home/docker/cp-test.txt ha-057998-m04:/home/docker/cp-test_ha-057998_ha-057998-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m04 "sudo cat /home/docker/cp-test_ha-057998_ha-057998-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp testdata/cp-test.txt ha-057998-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp ha-057998-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile817183063/001/cp-test_ha-057998-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp ha-057998-m02:/home/docker/cp-test.txt ha-057998:/home/docker/cp-test_ha-057998-m02_ha-057998.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998 "sudo cat /home/docker/cp-test_ha-057998-m02_ha-057998.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp ha-057998-m02:/home/docker/cp-test.txt ha-057998-m03:/home/docker/cp-test_ha-057998-m02_ha-057998-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m03 "sudo cat /home/docker/cp-test_ha-057998-m02_ha-057998-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp ha-057998-m02:/home/docker/cp-test.txt ha-057998-m04:/home/docker/cp-test_ha-057998-m02_ha-057998-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m04 "sudo cat /home/docker/cp-test_ha-057998-m02_ha-057998-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp testdata/cp-test.txt ha-057998-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp ha-057998-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile817183063/001/cp-test_ha-057998-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp ha-057998-m03:/home/docker/cp-test.txt ha-057998:/home/docker/cp-test_ha-057998-m03_ha-057998.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998 "sudo cat /home/docker/cp-test_ha-057998-m03_ha-057998.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp ha-057998-m03:/home/docker/cp-test.txt ha-057998-m02:/home/docker/cp-test_ha-057998-m03_ha-057998-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m02 "sudo cat /home/docker/cp-test_ha-057998-m03_ha-057998-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp ha-057998-m03:/home/docker/cp-test.txt ha-057998-m04:/home/docker/cp-test_ha-057998-m03_ha-057998-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m04 "sudo cat /home/docker/cp-test_ha-057998-m03_ha-057998-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp testdata/cp-test.txt ha-057998-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp ha-057998-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile817183063/001/cp-test_ha-057998-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp ha-057998-m04:/home/docker/cp-test.txt ha-057998:/home/docker/cp-test_ha-057998-m04_ha-057998.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998 "sudo cat /home/docker/cp-test_ha-057998-m04_ha-057998.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp ha-057998-m04:/home/docker/cp-test.txt ha-057998-m02:/home/docker/cp-test_ha-057998-m04_ha-057998-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m02 "sudo cat /home/docker/cp-test_ha-057998-m04_ha-057998-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 cp ha-057998-m04:/home/docker/cp-test.txt ha-057998-m03:/home/docker/cp-test_ha-057998-m04_ha-057998-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m03 "sudo cat /home/docker/cp-test_ha-057998-m04_ha-057998-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.20s)
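Note: CopyFile exercises minikube cp in every direction between the host and the four nodes, verifying each copy with an ssh cat. The core pattern, using node names from this run and an illustrative host destination path:
    # host -> node, then verify on the node
    out/minikube-linux-amd64 -p ha-057998 cp testdata/cp-test.txt ha-057998-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-057998 ssh -n ha-057998-m02 "sudo cat /home/docker/cp-test.txt"
    # node -> host
    out/minikube-linux-amd64 -p ha-057998 cp ha-057998-m02:/home/docker/cp-test.txt /tmp/cp-test_ha-057998-m02.txt
    # node -> node
    out/minikube-linux-amd64 -p ha-057998 cp ha-057998-m02:/home/docker/cp-test.txt ha-057998-m03:/home/docker/cp-test_ha-057998-m02_ha-057998-m03.txt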

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 node stop m02 -v=7 --alsologtostderr
E1216 19:56:31.459427   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:57:12.421510   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:57:13.884551   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-057998 node stop m02 -v=7 --alsologtostderr: (1m31.016115726s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-057998 status -v=7 --alsologtostderr: exit status 7 (660.774659ms)

                                                
                                                
-- stdout --
	ha-057998
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-057998-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-057998-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-057998-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 19:57:53.852005   29164 out.go:345] Setting OutFile to fd 1 ...
	I1216 19:57:53.852116   29164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:57:53.852125   29164 out.go:358] Setting ErrFile to fd 2...
	I1216 19:57:53.852129   29164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 19:57:53.852320   29164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 19:57:53.852495   29164 out.go:352] Setting JSON to false
	I1216 19:57:53.852522   29164 mustload.go:65] Loading cluster: ha-057998
	I1216 19:57:53.852567   29164 notify.go:220] Checking for updates...
	I1216 19:57:53.852935   29164 config.go:182] Loaded profile config "ha-057998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 19:57:53.852953   29164 status.go:174] checking status of ha-057998 ...
	I1216 19:57:53.853324   29164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:57:53.853377   29164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:57:53.873887   29164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45763
	I1216 19:57:53.874372   29164 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:57:53.875021   29164 main.go:141] libmachine: Using API Version  1
	I1216 19:57:53.875052   29164 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:57:53.875488   29164 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:57:53.875660   29164 main.go:141] libmachine: (ha-057998) Calling .GetState
	I1216 19:57:53.877444   29164 status.go:371] ha-057998 host status = "Running" (err=<nil>)
	I1216 19:57:53.877460   29164 host.go:66] Checking if "ha-057998" exists ...
	I1216 19:57:53.877758   29164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:57:53.877822   29164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:57:53.892769   29164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42989
	I1216 19:57:53.893283   29164 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:57:53.893801   29164 main.go:141] libmachine: Using API Version  1
	I1216 19:57:53.893818   29164 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:57:53.894176   29164 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:57:53.894352   29164 main.go:141] libmachine: (ha-057998) Calling .GetIP
	I1216 19:57:53.897519   29164 main.go:141] libmachine: (ha-057998) DBG | domain ha-057998 has defined MAC address 52:54:00:b6:50:a2 in network mk-ha-057998
	I1216 19:57:53.897967   29164 main.go:141] libmachine: (ha-057998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:50:a2", ip: ""} in network mk-ha-057998: {Iface:virbr1 ExpiryTime:2024-12-16 20:51:52 +0000 UTC Type:0 Mac:52:54:00:b6:50:a2 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-057998 Clientid:01:52:54:00:b6:50:a2}
	I1216 19:57:53.898002   29164 main.go:141] libmachine: (ha-057998) DBG | domain ha-057998 has defined IP address 192.168.39.192 and MAC address 52:54:00:b6:50:a2 in network mk-ha-057998
	I1216 19:57:53.898170   29164 host.go:66] Checking if "ha-057998" exists ...
	I1216 19:57:53.898573   29164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:57:53.898618   29164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:57:53.913463   29164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35793
	I1216 19:57:53.913818   29164 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:57:53.914252   29164 main.go:141] libmachine: Using API Version  1
	I1216 19:57:53.914275   29164 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:57:53.914574   29164 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:57:53.914771   29164 main.go:141] libmachine: (ha-057998) Calling .DriverName
	I1216 19:57:53.914936   29164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 19:57:53.914963   29164 main.go:141] libmachine: (ha-057998) Calling .GetSSHHostname
	I1216 19:57:53.918051   29164 main.go:141] libmachine: (ha-057998) DBG | domain ha-057998 has defined MAC address 52:54:00:b6:50:a2 in network mk-ha-057998
	I1216 19:57:53.918547   29164 main.go:141] libmachine: (ha-057998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:50:a2", ip: ""} in network mk-ha-057998: {Iface:virbr1 ExpiryTime:2024-12-16 20:51:52 +0000 UTC Type:0 Mac:52:54:00:b6:50:a2 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:ha-057998 Clientid:01:52:54:00:b6:50:a2}
	I1216 19:57:53.918578   29164 main.go:141] libmachine: (ha-057998) DBG | domain ha-057998 has defined IP address 192.168.39.192 and MAC address 52:54:00:b6:50:a2 in network mk-ha-057998
	I1216 19:57:53.918708   29164 main.go:141] libmachine: (ha-057998) Calling .GetSSHPort
	I1216 19:57:53.918893   29164 main.go:141] libmachine: (ha-057998) Calling .GetSSHKeyPath
	I1216 19:57:53.919089   29164 main.go:141] libmachine: (ha-057998) Calling .GetSSHUsername
	I1216 19:57:53.919223   29164 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/ha-057998/id_rsa Username:docker}
	I1216 19:57:54.008864   29164 ssh_runner.go:195] Run: systemctl --version
	I1216 19:57:54.019370   29164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 19:57:54.041099   29164 kubeconfig.go:125] found "ha-057998" server: "https://192.168.39.254:8443"
	I1216 19:57:54.041137   29164 api_server.go:166] Checking apiserver status ...
	I1216 19:57:54.041172   29164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 19:57:54.058829   29164 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1116/cgroup
	W1216 19:57:54.071180   29164 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1116/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 19:57:54.071257   29164 ssh_runner.go:195] Run: ls
	I1216 19:57:54.077320   29164 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1216 19:57:54.081394   29164 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1216 19:57:54.081426   29164 status.go:463] ha-057998 apiserver status = Running (err=<nil>)
	I1216 19:57:54.081439   29164 status.go:176] ha-057998 status: &{Name:ha-057998 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 19:57:54.081464   29164 status.go:174] checking status of ha-057998-m02 ...
	I1216 19:57:54.081861   29164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:57:54.081897   29164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:57:54.096597   29164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I1216 19:57:54.097063   29164 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:57:54.097559   29164 main.go:141] libmachine: Using API Version  1
	I1216 19:57:54.097584   29164 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:57:54.097872   29164 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:57:54.098050   29164 main.go:141] libmachine: (ha-057998-m02) Calling .GetState
	I1216 19:57:54.099562   29164 status.go:371] ha-057998-m02 host status = "Stopped" (err=<nil>)
	I1216 19:57:54.099583   29164 status.go:384] host is not running, skipping remaining checks
	I1216 19:57:54.099590   29164 status.go:176] ha-057998-m02 status: &{Name:ha-057998-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 19:57:54.099623   29164 status.go:174] checking status of ha-057998-m03 ...
	I1216 19:57:54.099909   29164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:57:54.099972   29164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:57:54.114822   29164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37091
	I1216 19:57:54.115314   29164 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:57:54.115846   29164 main.go:141] libmachine: Using API Version  1
	I1216 19:57:54.115871   29164 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:57:54.116189   29164 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:57:54.116360   29164 main.go:141] libmachine: (ha-057998-m03) Calling .GetState
	I1216 19:57:54.118131   29164 status.go:371] ha-057998-m03 host status = "Running" (err=<nil>)
	I1216 19:57:54.118149   29164 host.go:66] Checking if "ha-057998-m03" exists ...
	I1216 19:57:54.118436   29164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:57:54.118481   29164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:57:54.133871   29164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I1216 19:57:54.134328   29164 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:57:54.134871   29164 main.go:141] libmachine: Using API Version  1
	I1216 19:57:54.134895   29164 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:57:54.135229   29164 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:57:54.135440   29164 main.go:141] libmachine: (ha-057998-m03) Calling .GetIP
	I1216 19:57:54.138140   29164 main.go:141] libmachine: (ha-057998-m03) DBG | domain ha-057998-m03 has defined MAC address 52:54:00:59:07:5d in network mk-ha-057998
	I1216 19:57:54.138588   29164 main.go:141] libmachine: (ha-057998-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:07:5d", ip: ""} in network mk-ha-057998: {Iface:virbr1 ExpiryTime:2024-12-16 20:54:02 +0000 UTC Type:0 Mac:52:54:00:59:07:5d Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-057998-m03 Clientid:01:52:54:00:59:07:5d}
	I1216 19:57:54.138626   29164 main.go:141] libmachine: (ha-057998-m03) DBG | domain ha-057998-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:59:07:5d in network mk-ha-057998
	I1216 19:57:54.138732   29164 host.go:66] Checking if "ha-057998-m03" exists ...
	I1216 19:57:54.139063   29164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:57:54.139104   29164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:57:54.155364   29164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39563
	I1216 19:57:54.155909   29164 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:57:54.156488   29164 main.go:141] libmachine: Using API Version  1
	I1216 19:57:54.156516   29164 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:57:54.156849   29164 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:57:54.157010   29164 main.go:141] libmachine: (ha-057998-m03) Calling .DriverName
	I1216 19:57:54.157267   29164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 19:57:54.157290   29164 main.go:141] libmachine: (ha-057998-m03) Calling .GetSSHHostname
	I1216 19:57:54.160562   29164 main.go:141] libmachine: (ha-057998-m03) DBG | domain ha-057998-m03 has defined MAC address 52:54:00:59:07:5d in network mk-ha-057998
	I1216 19:57:54.161065   29164 main.go:141] libmachine: (ha-057998-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:07:5d", ip: ""} in network mk-ha-057998: {Iface:virbr1 ExpiryTime:2024-12-16 20:54:02 +0000 UTC Type:0 Mac:52:54:00:59:07:5d Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:ha-057998-m03 Clientid:01:52:54:00:59:07:5d}
	I1216 19:57:54.161101   29164 main.go:141] libmachine: (ha-057998-m03) DBG | domain ha-057998-m03 has defined IP address 192.168.39.215 and MAC address 52:54:00:59:07:5d in network mk-ha-057998
	I1216 19:57:54.161224   29164 main.go:141] libmachine: (ha-057998-m03) Calling .GetSSHPort
	I1216 19:57:54.161398   29164 main.go:141] libmachine: (ha-057998-m03) Calling .GetSSHKeyPath
	I1216 19:57:54.161562   29164 main.go:141] libmachine: (ha-057998-m03) Calling .GetSSHUsername
	I1216 19:57:54.161742   29164 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/ha-057998-m03/id_rsa Username:docker}
	I1216 19:57:54.245028   29164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 19:57:54.262880   29164 kubeconfig.go:125] found "ha-057998" server: "https://192.168.39.254:8443"
	I1216 19:57:54.262912   29164 api_server.go:166] Checking apiserver status ...
	I1216 19:57:54.262955   29164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 19:57:54.280599   29164 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1476/cgroup
	W1216 19:57:54.292910   29164 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1476/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 19:57:54.292993   29164 ssh_runner.go:195] Run: ls
	I1216 19:57:54.297834   29164 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1216 19:57:54.303290   29164 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1216 19:57:54.303325   29164 status.go:463] ha-057998-m03 apiserver status = Running (err=<nil>)
	I1216 19:57:54.303341   29164 status.go:176] ha-057998-m03 status: &{Name:ha-057998-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 19:57:54.303355   29164 status.go:174] checking status of ha-057998-m04 ...
	I1216 19:57:54.303634   29164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:57:54.303675   29164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:57:54.318650   29164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45175
	I1216 19:57:54.319120   29164 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:57:54.319710   29164 main.go:141] libmachine: Using API Version  1
	I1216 19:57:54.319736   29164 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:57:54.320016   29164 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:57:54.320213   29164 main.go:141] libmachine: (ha-057998-m04) Calling .GetState
	I1216 19:57:54.321716   29164 status.go:371] ha-057998-m04 host status = "Running" (err=<nil>)
	I1216 19:57:54.321729   29164 host.go:66] Checking if "ha-057998-m04" exists ...
	I1216 19:57:54.322033   29164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:57:54.322069   29164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:57:54.337599   29164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33525
	I1216 19:57:54.338039   29164 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:57:54.338517   29164 main.go:141] libmachine: Using API Version  1
	I1216 19:57:54.338531   29164 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:57:54.338824   29164 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:57:54.339031   29164 main.go:141] libmachine: (ha-057998-m04) Calling .GetIP
	I1216 19:57:54.341683   29164 main.go:141] libmachine: (ha-057998-m04) DBG | domain ha-057998-m04 has defined MAC address 52:54:00:0d:d5:4c in network mk-ha-057998
	I1216 19:57:54.342132   29164 main.go:141] libmachine: (ha-057998-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:d5:4c", ip: ""} in network mk-ha-057998: {Iface:virbr1 ExpiryTime:2024-12-16 20:55:28 +0000 UTC Type:0 Mac:52:54:00:0d:d5:4c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-057998-m04 Clientid:01:52:54:00:0d:d5:4c}
	I1216 19:57:54.342159   29164 main.go:141] libmachine: (ha-057998-m04) DBG | domain ha-057998-m04 has defined IP address 192.168.39.92 and MAC address 52:54:00:0d:d5:4c in network mk-ha-057998
	I1216 19:57:54.342301   29164 host.go:66] Checking if "ha-057998-m04" exists ...
	I1216 19:57:54.342575   29164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 19:57:54.342613   29164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 19:57:54.357898   29164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33261
	I1216 19:57:54.358410   29164 main.go:141] libmachine: () Calling .GetVersion
	I1216 19:57:54.358910   29164 main.go:141] libmachine: Using API Version  1
	I1216 19:57:54.358936   29164 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 19:57:54.359282   29164 main.go:141] libmachine: () Calling .GetMachineName
	I1216 19:57:54.359457   29164 main.go:141] libmachine: (ha-057998-m04) Calling .DriverName
	I1216 19:57:54.359631   29164 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 19:57:54.359653   29164 main.go:141] libmachine: (ha-057998-m04) Calling .GetSSHHostname
	I1216 19:57:54.362316   29164 main.go:141] libmachine: (ha-057998-m04) DBG | domain ha-057998-m04 has defined MAC address 52:54:00:0d:d5:4c in network mk-ha-057998
	I1216 19:57:54.362748   29164 main.go:141] libmachine: (ha-057998-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:d5:4c", ip: ""} in network mk-ha-057998: {Iface:virbr1 ExpiryTime:2024-12-16 20:55:28 +0000 UTC Type:0 Mac:52:54:00:0d:d5:4c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-057998-m04 Clientid:01:52:54:00:0d:d5:4c}
	I1216 19:57:54.362773   29164 main.go:141] libmachine: (ha-057998-m04) DBG | domain ha-057998-m04 has defined IP address 192.168.39.92 and MAC address 52:54:00:0d:d5:4c in network mk-ha-057998
	I1216 19:57:54.362972   29164 main.go:141] libmachine: (ha-057998-m04) Calling .GetSSHPort
	I1216 19:57:54.363153   29164 main.go:141] libmachine: (ha-057998-m04) Calling .GetSSHKeyPath
	I1216 19:57:54.363325   29164 main.go:141] libmachine: (ha-057998-m04) Calling .GetSSHUsername
	I1216 19:57:54.363444   29164 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/ha-057998-m04/id_rsa Username:docker}
	I1216 19:57:54.447972   29164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 19:57:54.466173   29164 status.go:176] ha-057998-m04 status: &{Name:ha-057998-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.68s)
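Note: with m02 stopped, minikube status still prints per-node state but exits non-zero (status 7 in this run), so the test asserts on the output rather than the exit code. The stop/inspect/restart cycle, picked up again in RestartSecondaryNode below:
    out/minikube-linux-amd64 -p ha-057998 node stop m02 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-057998 status -v=7 --alsologtostderr   # exits 7 while m02 is down
    out/minikube-linux-amd64 -p ha-057998 node start m02 -v=7 --alsologtostderr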

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (46.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 node start m02 -v=7 --alsologtostderr
E1216 19:58:34.343535   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 19:58:36.950553   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-057998 node start m02 -v=7 --alsologtostderr: (45.742708627s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (46.70s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (467.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-057998 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-057998 -v=7 --alsologtostderr
E1216 20:00:50.481835   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:01:18.184876   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:02:13.883839   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-057998 -v=7 --alsologtostderr: (4m34.381273658s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-057998 --wait=true -v=7 --alsologtostderr
E1216 20:05:50.481995   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-057998 --wait=true -v=7 --alsologtostderr: (3m13.266840364s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-057998
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (467.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-057998 node delete m03 -v=7 --alsologtostderr: (17.753292418s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.51s)
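Note (illustrative): the last step above verifies node readiness with a kubectl go-template. The sketch below runs the same template outside the harness and counts how many nodes report Ready=True; everything except the Go scaffolding is quoted from the command in the log, and kubectl is assumed to be on PATH and pointed at the test cluster.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same go-template as in the test: emit each node's Ready condition status.
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("nodes reporting Ready=True: %d\n", strings.Count(string(out), "True"))
	}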

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (272.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 stop -v=7 --alsologtostderr
E1216 20:07:13.883872   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:10:50.481631   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-057998 stop -v=7 --alsologtostderr: (4m32.665337546s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-057998 status -v=7 --alsologtostderr: exit status 7 (110.074463ms)

                                                
                                                
-- stdout --
	ha-057998
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-057998-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-057998-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 20:11:22.309643   33510 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:11:22.309927   33510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:11:22.309937   33510 out.go:358] Setting ErrFile to fd 2...
	I1216 20:11:22.309942   33510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:11:22.310135   33510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:11:22.310297   33510 out.go:352] Setting JSON to false
	I1216 20:11:22.310323   33510 mustload.go:65] Loading cluster: ha-057998
	I1216 20:11:22.310454   33510 notify.go:220] Checking for updates...
	I1216 20:11:22.310891   33510 config.go:182] Loaded profile config "ha-057998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:11:22.310920   33510 status.go:174] checking status of ha-057998 ...
	I1216 20:11:22.311525   33510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:11:22.311572   33510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:11:22.333131   33510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33521
	I1216 20:11:22.333670   33510 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:11:22.334405   33510 main.go:141] libmachine: Using API Version  1
	I1216 20:11:22.334433   33510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:11:22.334894   33510 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:11:22.335135   33510 main.go:141] libmachine: (ha-057998) Calling .GetState
	I1216 20:11:22.336920   33510 status.go:371] ha-057998 host status = "Stopped" (err=<nil>)
	I1216 20:11:22.336950   33510 status.go:384] host is not running, skipping remaining checks
	I1216 20:11:22.336960   33510 status.go:176] ha-057998 status: &{Name:ha-057998 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 20:11:22.336990   33510 status.go:174] checking status of ha-057998-m02 ...
	I1216 20:11:22.337308   33510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:11:22.337354   33510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:11:22.352199   33510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40909
	I1216 20:11:22.352620   33510 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:11:22.353119   33510 main.go:141] libmachine: Using API Version  1
	I1216 20:11:22.353157   33510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:11:22.353475   33510 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:11:22.353634   33510 main.go:141] libmachine: (ha-057998-m02) Calling .GetState
	I1216 20:11:22.355130   33510 status.go:371] ha-057998-m02 host status = "Stopped" (err=<nil>)
	I1216 20:11:22.355146   33510 status.go:384] host is not running, skipping remaining checks
	I1216 20:11:22.355153   33510 status.go:176] ha-057998-m02 status: &{Name:ha-057998-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 20:11:22.355173   33510 status.go:174] checking status of ha-057998-m04 ...
	I1216 20:11:22.355540   33510 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:11:22.355582   33510 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:11:22.370262   33510 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41297
	I1216 20:11:22.370702   33510 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:11:22.371130   33510 main.go:141] libmachine: Using API Version  1
	I1216 20:11:22.371154   33510 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:11:22.371488   33510 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:11:22.371670   33510 main.go:141] libmachine: (ha-057998-m04) Calling .GetState
	I1216 20:11:22.373499   33510 status.go:371] ha-057998-m04 host status = "Stopped" (err=<nil>)
	I1216 20:11:22.373511   33510 status.go:384] host is not running, skipping remaining checks
	I1216 20:11:22.373516   33510 status.go:176] ha-057998-m04 status: &{Name:ha-057998-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.78s)
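Note (illustrative): in the stderr dump above, the status command exits with status 7 once every node in the profile is stopped. The sketch below runs the same command with the same profile and reads that exit code; the binary path and profile name are copied from the log, and the code is only a demonstration of reading the exit status, not part of the suite.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("out/minikube-linux-amd64", "-p", "ha-057998", "status").Run()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("status exit code: 0 (cluster running)")
		case errors.As(err, &exitErr):
			// The run logged above returned 7 with all hosts stopped.
			fmt.Println("status exit code:", exitErr.ExitCode())
		default:
			panic(err)
		}
	}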

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (120.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-057998 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1216 20:12:13.547176   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:12:13.883932   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-057998 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m59.521380886s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (120.26s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (77.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-057998 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-057998 --control-plane -v=7 --alsologtostderr: (1m17.102131529s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-057998 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.96s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

                                                
                                    
x
+
TestJSONOutput/start/Command (81.68s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-685695 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1216 20:15:16.952264   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:15:50.486556   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-685695 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m21.684082694s)
--- PASS: TestJSONOutput/start/Command (81.68s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-685695 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-685695 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (23.4s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-685695 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-685695 --output=json --user=testUser: (23.400941154s)
--- PASS: TestJSONOutput/stop/Command (23.40s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-906980 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-906980 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (67.540547ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"da08af5d-6684-48cf-a33a-5fd11968887f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-906980] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"611ffba4-4c9b-4a31-bc4e-e4ab382eebc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20091"}}
	{"specversion":"1.0","id":"1929f8c7-1d4e-4a9d-8a36-9183f729bfeb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fa848a24-dcab-4379-97be-e47d7a0d4dca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig"}}
	{"specversion":"1.0","id":"be42f77f-b281-4d7c-893e-5560ff6240e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube"}}
	{"specversion":"1.0","id":"ab33f461-bec8-4800-b76a-0a7668835b4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"adb3c545-287e-4e31-a48b-a44c0d926dda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e7742fac-48bd-4e7c-a532-7006dc9b4655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-906980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-906980
--- PASS: TestErrorJSONOutput (0.21s)
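Note (illustrative): each --output=json line above is a small event carrying the fields specversion, id, source, type, datacontenttype and data. The decoder below unmarshals one of the lines from the stdout dump; the struct and variable names are made up for the example, while the JSON keys and values are copied from the log.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type minikubeEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// Error event copied from the stdout dump above.
		line := `{"specversion":"1.0","id":"e7742fac-48bd-4e7c-a532-7006dc9b4655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
		var ev minikubeEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, "->", ev.Data["message"])
	}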

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (94.36s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-193409 --driver=kvm2  --container-runtime=crio
E1216 20:17:13.885020   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-193409 --driver=kvm2  --container-runtime=crio: (47.439384981s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-206507 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-206507 --driver=kvm2  --container-runtime=crio: (43.873799919s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-193409
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-206507
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-206507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-206507
helpers_test.go:175: Cleaning up "first-193409" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-193409
--- PASS: TestMinikubeProfile (94.36s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.88s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-588105 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-588105 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.878519707s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.88s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-588105 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-588105 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
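Note (illustrative): the mount checks above ssh into the node and look for a 9p mount of /minikube-host. The sketch below does the same through the minikube binary used by the suite; the binary path and profile name come from the log, and the filtering is a hypothetical stand-in for the "mount | grep 9p" step.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-588105",
			"ssh", "--", "mount").Output()
		if err != nil {
			panic(err)
		}
		// Equivalent of the "| grep 9p" in the logged command.
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, "9p") {
				fmt.Println("9p mount present:", line)
			}
		}
	}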

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.99s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-603921 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-603921 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.985718077s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.99s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-603921 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-603921 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-588105 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-603921 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-603921 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-603921
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-603921: (1.321112396s)
--- PASS: TestMountStart/serial/Stop (1.32s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (21.29s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-603921
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-603921: (20.287468446s)
--- PASS: TestMountStart/serial/RestartStopped (21.29s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-603921 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-603921 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (112.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-228964 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1216 20:20:50.481371   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-228964 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.329854443s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.75s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-228964 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-228964 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-228964 -- rollout status deployment/busybox: (2.496842296s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-228964 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-228964 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-228964 -- exec busybox-58667487b6-mdh75 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-228964 -- exec busybox-58667487b6-qnpfm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-228964 -- exec busybox-58667487b6-mdh75 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-228964 -- exec busybox-58667487b6-qnpfm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-228964 -- exec busybox-58667487b6-mdh75 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-228964 -- exec busybox-58667487b6-qnpfm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.07s)
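Note (illustrative): the DNS checks above exec nslookup in each busybox pod against kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local. The loop below reproduces that sequence; pod names, context and lookup targets are copied from the commands in the log, kubectl is assumed to be on PATH, and the loop itself is only an illustration of the pattern.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		pods := []string{"busybox-58667487b6-mdh75", "busybox-58667487b6-qnpfm"}
		targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, pod := range pods {
			for _, target := range targets {
				// Same shape as the logged commands: kubectl exec <pod> -- nslookup <target>.
				out, err := exec.Command("kubectl", "--context", "multinode-228964",
					"exec", pod, "--", "nslookup", target).CombinedOutput()
				fmt.Printf("%s -> %s (err=%v)\n%s\n", pod, target, err, out)
			}
		}
	}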

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-228964 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-228964 -- exec busybox-58667487b6-mdh75 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-228964 -- exec busybox-58667487b6-mdh75 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-228964 -- exec busybox-58667487b6-qnpfm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-228964 -- exec busybox-58667487b6-qnpfm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (52.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-228964 -v 3 --alsologtostderr
E1216 20:22:13.884129   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-228964 -v 3 --alsologtostderr: (51.982325561s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.57s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-228964 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 cp testdata/cp-test.txt multinode-228964:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 cp multinode-228964:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1624223668/001/cp-test_multinode-228964.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 cp multinode-228964:/home/docker/cp-test.txt multinode-228964-m02:/home/docker/cp-test_multinode-228964_multinode-228964-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964-m02 "sudo cat /home/docker/cp-test_multinode-228964_multinode-228964-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 cp multinode-228964:/home/docker/cp-test.txt multinode-228964-m03:/home/docker/cp-test_multinode-228964_multinode-228964-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964-m03 "sudo cat /home/docker/cp-test_multinode-228964_multinode-228964-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 cp testdata/cp-test.txt multinode-228964-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 cp multinode-228964-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1624223668/001/cp-test_multinode-228964-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 cp multinode-228964-m02:/home/docker/cp-test.txt multinode-228964:/home/docker/cp-test_multinode-228964-m02_multinode-228964.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964 "sudo cat /home/docker/cp-test_multinode-228964-m02_multinode-228964.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 cp multinode-228964-m02:/home/docker/cp-test.txt multinode-228964-m03:/home/docker/cp-test_multinode-228964-m02_multinode-228964-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964-m03 "sudo cat /home/docker/cp-test_multinode-228964-m02_multinode-228964-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 cp testdata/cp-test.txt multinode-228964-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 cp multinode-228964-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1624223668/001/cp-test_multinode-228964-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 cp multinode-228964-m03:/home/docker/cp-test.txt multinode-228964:/home/docker/cp-test_multinode-228964-m03_multinode-228964.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964 "sudo cat /home/docker/cp-test_multinode-228964-m03_multinode-228964.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 cp multinode-228964-m03:/home/docker/cp-test.txt multinode-228964-m02:/home/docker/cp-test_multinode-228964-m03_multinode-228964-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964-m02 "sudo cat /home/docker/cp-test_multinode-228964-m03_multinode-228964-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.37s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-228964 node stop m03: (1.485442073s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-228964 status: exit status 7 (431.994849ms)

                                                
                                                
-- stdout --
	multinode-228964
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-228964-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-228964-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-228964 status --alsologtostderr: exit status 7 (449.353456ms)

                                                
                                                
-- stdout --
	multinode-228964
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-228964-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-228964-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 20:22:29.011656   41801 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:22:29.012134   41801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:22:29.012188   41801 out.go:358] Setting ErrFile to fd 2...
	I1216 20:22:29.012206   41801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:22:29.012645   41801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:22:29.012993   41801 out.go:352] Setting JSON to false
	I1216 20:22:29.013077   41801 mustload.go:65] Loading cluster: multinode-228964
	I1216 20:22:29.013180   41801 notify.go:220] Checking for updates...
	I1216 20:22:29.013922   41801 config.go:182] Loaded profile config "multinode-228964": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:22:29.013954   41801 status.go:174] checking status of multinode-228964 ...
	I1216 20:22:29.014485   41801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:22:29.014526   41801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:22:29.035645   41801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38747
	I1216 20:22:29.036243   41801 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:22:29.036831   41801 main.go:141] libmachine: Using API Version  1
	I1216 20:22:29.036852   41801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:22:29.037199   41801 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:22:29.037420   41801 main.go:141] libmachine: (multinode-228964) Calling .GetState
	I1216 20:22:29.038939   41801 status.go:371] multinode-228964 host status = "Running" (err=<nil>)
	I1216 20:22:29.038958   41801 host.go:66] Checking if "multinode-228964" exists ...
	I1216 20:22:29.039290   41801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:22:29.039351   41801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:22:29.054972   41801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45517
	I1216 20:22:29.055429   41801 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:22:29.055880   41801 main.go:141] libmachine: Using API Version  1
	I1216 20:22:29.055902   41801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:22:29.056280   41801 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:22:29.056472   41801 main.go:141] libmachine: (multinode-228964) Calling .GetIP
	I1216 20:22:29.059168   41801 main.go:141] libmachine: (multinode-228964) DBG | domain multinode-228964 has defined MAC address 52:54:00:31:54:ba in network mk-multinode-228964
	I1216 20:22:29.059594   41801 main.go:141] libmachine: (multinode-228964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:54:ba", ip: ""} in network mk-multinode-228964: {Iface:virbr1 ExpiryTime:2024-12-16 21:19:44 +0000 UTC Type:0 Mac:52:54:00:31:54:ba Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:multinode-228964 Clientid:01:52:54:00:31:54:ba}
	I1216 20:22:29.059626   41801 main.go:141] libmachine: (multinode-228964) DBG | domain multinode-228964 has defined IP address 192.168.39.91 and MAC address 52:54:00:31:54:ba in network mk-multinode-228964
	I1216 20:22:29.059801   41801 host.go:66] Checking if "multinode-228964" exists ...
	I1216 20:22:29.060176   41801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:22:29.060221   41801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:22:29.075556   41801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36927
	I1216 20:22:29.076011   41801 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:22:29.076500   41801 main.go:141] libmachine: Using API Version  1
	I1216 20:22:29.076525   41801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:22:29.076805   41801 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:22:29.076969   41801 main.go:141] libmachine: (multinode-228964) Calling .DriverName
	I1216 20:22:29.077152   41801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 20:22:29.077172   41801 main.go:141] libmachine: (multinode-228964) Calling .GetSSHHostname
	I1216 20:22:29.079878   41801 main.go:141] libmachine: (multinode-228964) DBG | domain multinode-228964 has defined MAC address 52:54:00:31:54:ba in network mk-multinode-228964
	I1216 20:22:29.080326   41801 main.go:141] libmachine: (multinode-228964) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:31:54:ba", ip: ""} in network mk-multinode-228964: {Iface:virbr1 ExpiryTime:2024-12-16 21:19:44 +0000 UTC Type:0 Mac:52:54:00:31:54:ba Iaid: IPaddr:192.168.39.91 Prefix:24 Hostname:multinode-228964 Clientid:01:52:54:00:31:54:ba}
	I1216 20:22:29.080359   41801 main.go:141] libmachine: (multinode-228964) DBG | domain multinode-228964 has defined IP address 192.168.39.91 and MAC address 52:54:00:31:54:ba in network mk-multinode-228964
	I1216 20:22:29.080461   41801 main.go:141] libmachine: (multinode-228964) Calling .GetSSHPort
	I1216 20:22:29.080623   41801 main.go:141] libmachine: (multinode-228964) Calling .GetSSHKeyPath
	I1216 20:22:29.080746   41801 main.go:141] libmachine: (multinode-228964) Calling .GetSSHUsername
	I1216 20:22:29.080873   41801 sshutil.go:53] new ssh client: &{IP:192.168.39.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/multinode-228964/id_rsa Username:docker}
	I1216 20:22:29.168583   41801 ssh_runner.go:195] Run: systemctl --version
	I1216 20:22:29.175342   41801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 20:22:29.192221   41801 kubeconfig.go:125] found "multinode-228964" server: "https://192.168.39.91:8443"
	I1216 20:22:29.192277   41801 api_server.go:166] Checking apiserver status ...
	I1216 20:22:29.192322   41801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 20:22:29.207704   41801 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1082/cgroup
	W1216 20:22:29.222541   41801 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1082/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 20:22:29.222601   41801 ssh_runner.go:195] Run: ls
	I1216 20:22:29.227784   41801 api_server.go:253] Checking apiserver healthz at https://192.168.39.91:8443/healthz ...
	I1216 20:22:29.232265   41801 api_server.go:279] https://192.168.39.91:8443/healthz returned 200:
	ok
	I1216 20:22:29.232296   41801 status.go:463] multinode-228964 apiserver status = Running (err=<nil>)
	I1216 20:22:29.232314   41801 status.go:176] multinode-228964 status: &{Name:multinode-228964 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 20:22:29.232335   41801 status.go:174] checking status of multinode-228964-m02 ...
	I1216 20:22:29.232614   41801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:22:29.232647   41801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:22:29.248206   41801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33005
	I1216 20:22:29.248784   41801 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:22:29.249303   41801 main.go:141] libmachine: Using API Version  1
	I1216 20:22:29.249318   41801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:22:29.249629   41801 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:22:29.249820   41801 main.go:141] libmachine: (multinode-228964-m02) Calling .GetState
	I1216 20:22:29.251332   41801 status.go:371] multinode-228964-m02 host status = "Running" (err=<nil>)
	I1216 20:22:29.251350   41801 host.go:66] Checking if "multinode-228964-m02" exists ...
	I1216 20:22:29.251793   41801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:22:29.251841   41801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:22:29.268178   41801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43021
	I1216 20:22:29.268649   41801 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:22:29.269165   41801 main.go:141] libmachine: Using API Version  1
	I1216 20:22:29.269192   41801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:22:29.269569   41801 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:22:29.269771   41801 main.go:141] libmachine: (multinode-228964-m02) Calling .GetIP
	I1216 20:22:29.273040   41801 main.go:141] libmachine: (multinode-228964-m02) DBG | domain multinode-228964-m02 has defined MAC address 52:54:00:40:5d:65 in network mk-multinode-228964
	I1216 20:22:29.273416   41801 main.go:141] libmachine: (multinode-228964-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:5d:65", ip: ""} in network mk-multinode-228964: {Iface:virbr1 ExpiryTime:2024-12-16 21:20:46 +0000 UTC Type:0 Mac:52:54:00:40:5d:65 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-228964-m02 Clientid:01:52:54:00:40:5d:65}
	I1216 20:22:29.273460   41801 main.go:141] libmachine: (multinode-228964-m02) DBG | domain multinode-228964-m02 has defined IP address 192.168.39.24 and MAC address 52:54:00:40:5d:65 in network mk-multinode-228964
	I1216 20:22:29.273624   41801 host.go:66] Checking if "multinode-228964-m02" exists ...
	I1216 20:22:29.273921   41801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:22:29.273958   41801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:22:29.290061   41801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42739
	I1216 20:22:29.290529   41801 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:22:29.290967   41801 main.go:141] libmachine: Using API Version  1
	I1216 20:22:29.291003   41801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:22:29.291350   41801 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:22:29.291577   41801 main.go:141] libmachine: (multinode-228964-m02) Calling .DriverName
	I1216 20:22:29.291787   41801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 20:22:29.291811   41801 main.go:141] libmachine: (multinode-228964-m02) Calling .GetSSHHostname
	I1216 20:22:29.294674   41801 main.go:141] libmachine: (multinode-228964-m02) DBG | domain multinode-228964-m02 has defined MAC address 52:54:00:40:5d:65 in network mk-multinode-228964
	I1216 20:22:29.295090   41801 main.go:141] libmachine: (multinode-228964-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:5d:65", ip: ""} in network mk-multinode-228964: {Iface:virbr1 ExpiryTime:2024-12-16 21:20:46 +0000 UTC Type:0 Mac:52:54:00:40:5d:65 Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:multinode-228964-m02 Clientid:01:52:54:00:40:5d:65}
	I1216 20:22:29.295121   41801 main.go:141] libmachine: (multinode-228964-m02) DBG | domain multinode-228964-m02 has defined IP address 192.168.39.24 and MAC address 52:54:00:40:5d:65 in network mk-multinode-228964
	I1216 20:22:29.295256   41801 main.go:141] libmachine: (multinode-228964-m02) Calling .GetSSHPort
	I1216 20:22:29.295424   41801 main.go:141] libmachine: (multinode-228964-m02) Calling .GetSSHKeyPath
	I1216 20:22:29.295579   41801 main.go:141] libmachine: (multinode-228964-m02) Calling .GetSSHUsername
	I1216 20:22:29.295743   41801 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/multinode-228964-m02/id_rsa Username:docker}
	I1216 20:22:29.379068   41801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 20:22:29.394213   41801 status.go:176] multinode-228964-m02 status: &{Name:multinode-228964-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1216 20:22:29.394260   41801 status.go:174] checking status of multinode-228964-m03 ...
	I1216 20:22:29.394677   41801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:22:29.394723   41801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:22:29.410469   41801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36355
	I1216 20:22:29.410960   41801 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:22:29.411603   41801 main.go:141] libmachine: Using API Version  1
	I1216 20:22:29.411635   41801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:22:29.411960   41801 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:22:29.412149   41801 main.go:141] libmachine: (multinode-228964-m03) Calling .GetState
	I1216 20:22:29.413926   41801 status.go:371] multinode-228964-m03 host status = "Stopped" (err=<nil>)
	I1216 20:22:29.413944   41801 status.go:384] host is not running, skipping remaining checks
	I1216 20:22:29.413949   41801 status.go:176] multinode-228964-m03 status: &{Name:multinode-228964-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.37s)
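For reference, the probes behind the status output above (apiserver /healthz, kubelet unit state, disk usage on /var) can be re-run by hand. A rough sketch against the same profile, assuming minikube ssh's -n/--node flag to target the worker and kubectl's --raw as a stand-in for the direct healthz check:

	# kubelet unit state and /var usage on the worker (exit status reflects is-active)
	out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964-m02 "sudo systemctl is-active --quiet service kubelet"
	out/minikube-linux-amd64 -p multinode-228964 ssh -n multinode-228964-m02 "df -h /var | awk 'NR==2{print \$5}'"
	# apiserver health through the configured kubeconfig
	kubectl --context multinode-228964 get --raw='/healthz'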

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (38.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-228964 node start m03 -v=7 --alsologtostderr: (38.17578956s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.82s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (421.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-228964
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-228964
E1216 20:25:50.486659   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-228964: (3m2.984568924s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-228964 --wait=true -v=8 --alsologtostderr
E1216 20:27:13.885490   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:28:53.549918   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-228964 --wait=true -v=8 --alsologtostderr: (3m58.794487629s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-228964
--- PASS: TestMultiNode/serial/RestartKeepsNodes (421.88s)
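What this test asserts is that the node list survives a full stop/start cycle. A minimal sketch of the same check, assuming shell command substitution around the commands shown above:

	before=$(out/minikube-linux-amd64 node list -p multinode-228964)
	out/minikube-linux-amd64 stop -p multinode-228964
	out/minikube-linux-amd64 start -p multinode-228964 --wait=true
	after=$(out/minikube-linux-amd64 node list -p multinode-228964)
	[ "$before" = "$after" ] && echo "node list preserved across restart"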

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-228964 node delete m03: (1.944986833s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.49s)
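The quoted go-template above just prints the Ready condition of every remaining node. An equivalent jsonpath form (an assumed alternative, not the template the test uses) avoids the nested quoting:

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'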

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (181.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 stop
E1216 20:30:50.486776   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:31:56.956725   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
E1216 20:32:13.885777   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-228964 stop: (3m1.724496071s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-228964 status: exit status 7 (96.103339ms)

                                                
                                                
-- stdout --
	multinode-228964
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-228964-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-228964 status --alsologtostderr: exit status 7 (84.651731ms)

                                                
                                                
-- stdout --
	multinode-228964
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-228964-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 20:33:14.470925   45142 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:33:14.471058   45142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:33:14.471074   45142 out.go:358] Setting ErrFile to fd 2...
	I1216 20:33:14.471081   45142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:33:14.471316   45142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:33:14.471476   45142 out.go:352] Setting JSON to false
	I1216 20:33:14.471503   45142 mustload.go:65] Loading cluster: multinode-228964
	I1216 20:33:14.471549   45142 notify.go:220] Checking for updates...
	I1216 20:33:14.472091   45142 config.go:182] Loaded profile config "multinode-228964": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:33:14.472117   45142 status.go:174] checking status of multinode-228964 ...
	I1216 20:33:14.472568   45142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:33:14.472631   45142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:33:14.487608   45142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43115
	I1216 20:33:14.488158   45142 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:33:14.488753   45142 main.go:141] libmachine: Using API Version  1
	I1216 20:33:14.488777   45142 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:33:14.489209   45142 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:33:14.489380   45142 main.go:141] libmachine: (multinode-228964) Calling .GetState
	I1216 20:33:14.491280   45142 status.go:371] multinode-228964 host status = "Stopped" (err=<nil>)
	I1216 20:33:14.491293   45142 status.go:384] host is not running, skipping remaining checks
	I1216 20:33:14.491298   45142 status.go:176] multinode-228964 status: &{Name:multinode-228964 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 20:33:14.491329   45142 status.go:174] checking status of multinode-228964-m02 ...
	I1216 20:33:14.491643   45142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 20:33:14.491683   45142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 20:33:14.506633   45142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35863
	I1216 20:33:14.507121   45142 main.go:141] libmachine: () Calling .GetVersion
	I1216 20:33:14.507697   45142 main.go:141] libmachine: Using API Version  1
	I1216 20:33:14.507722   45142 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 20:33:14.508028   45142 main.go:141] libmachine: () Calling .GetMachineName
	I1216 20:33:14.508242   45142 main.go:141] libmachine: (multinode-228964-m02) Calling .GetState
	I1216 20:33:14.509943   45142 status.go:371] multinode-228964-m02 host status = "Stopped" (err=<nil>)
	I1216 20:33:14.509958   45142 status.go:384] host is not running, skipping remaining checks
	I1216 20:33:14.509964   45142 status.go:176] multinode-228964-m02 status: &{Name:multinode-228964-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.91s)
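Here `status` exits 7 once every node is stopped, so the non-zero exit is the expected outcome rather than a failure. A small sketch of scripting around it, treating code 7 as "stopped" only because that is what this run shows:

	out/minikube-linux-amd64 -p multinode-228964 status
	rc=$?
	if [ "$rc" -eq 7 ]; then echo "all hosts stopped (exit 7, as in this run)"; elif [ "$rc" -ne 0 ]; then echo "status reported a problem: $rc"; fi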

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (116.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-228964 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-228964 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.986316385s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-228964 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (116.54s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (46.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-228964
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-228964-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-228964-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (66.769208ms)

                                                
                                                
-- stdout --
	* [multinode-228964-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20091
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-228964-m02' is duplicated with machine name 'multinode-228964-m02' in profile 'multinode-228964'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-228964-m03 --driver=kvm2  --container-runtime=crio
E1216 20:35:50.483562   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-228964-m03 --driver=kvm2  --container-runtime=crio: (45.231431498s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-228964
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-228964: exit status 80 (213.773443ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-228964 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-228964-m03 already exists in multinode-228964-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-228964-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (46.35s)
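Both failures above come from the requested name colliding with an existing machine name (multinode-228964-m02/-m03). Listing profiles first is the simple way to pick a name that is still free; a sketch using the profile list command seen elsewhere in this report, with a hypothetical unused profile name:

	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 start -p some-unused-name --driver=kvm2 --container-runtime=crio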

                                                
                                    
x
+
TestScheduledStopUnix (114.21s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-979258 --memory=2048 --driver=kvm2  --container-runtime=crio
E1216 20:40:50.486223   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-979258 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.583140003s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-979258 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-979258 -n scheduled-stop-979258
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-979258 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1216 20:41:27.276346   14254 retry.go:31] will retry after 69.564µs: open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/scheduled-stop-979258/pid: no such file or directory
I1216 20:41:27.277498   14254 retry.go:31] will retry after 195.713µs: open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/scheduled-stop-979258/pid: no such file or directory
I1216 20:41:27.278645   14254 retry.go:31] will retry after 134.889µs: open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/scheduled-stop-979258/pid: no such file or directory
I1216 20:41:27.279782   14254 retry.go:31] will retry after 419.908µs: open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/scheduled-stop-979258/pid: no such file or directory
I1216 20:41:27.280921   14254 retry.go:31] will retry after 659.643µs: open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/scheduled-stop-979258/pid: no such file or directory
I1216 20:41:27.282047   14254 retry.go:31] will retry after 904.82µs: open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/scheduled-stop-979258/pid: no such file or directory
I1216 20:41:27.283200   14254 retry.go:31] will retry after 1.673041ms: open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/scheduled-stop-979258/pid: no such file or directory
I1216 20:41:27.285463   14254 retry.go:31] will retry after 1.431856ms: open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/scheduled-stop-979258/pid: no such file or directory
I1216 20:41:27.287673   14254 retry.go:31] will retry after 3.017651ms: open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/scheduled-stop-979258/pid: no such file or directory
I1216 20:41:27.290900   14254 retry.go:31] will retry after 5.577585ms: open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/scheduled-stop-979258/pid: no such file or directory
I1216 20:41:27.297182   14254 retry.go:31] will retry after 7.280959ms: open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/scheduled-stop-979258/pid: no such file or directory
I1216 20:41:27.305440   14254 retry.go:31] will retry after 8.743154ms: open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/scheduled-stop-979258/pid: no such file or directory
I1216 20:41:27.314791   14254 retry.go:31] will retry after 18.231901ms: open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/scheduled-stop-979258/pid: no such file or directory
I1216 20:41:27.334125   14254 retry.go:31] will retry after 19.274238ms: open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/scheduled-stop-979258/pid: no such file or directory
I1216 20:41:27.354429   14254 retry.go:31] will retry after 16.37531ms: open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/scheduled-stop-979258/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-979258 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-979258 -n scheduled-stop-979258
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-979258
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-979258 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1216 20:42:13.885864   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-979258
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-979258: exit status 7 (65.026729ms)

                                                
                                                
-- stdout --
	scheduled-stop-979258
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-979258 -n scheduled-stop-979258
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-979258 -n scheduled-stop-979258: exit status 7 (64.284938ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-979258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-979258
--- PASS: TestScheduledStopUnix (114.21s)
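The scheduled-stop flow exercised above, condensed to the three flags the test drives (a sketch reusing the exact commands from this log):

	out/minikube-linux-amd64 stop -p scheduled-stop-979258 --schedule 5m        # arm a stop five minutes out
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-979258
	out/minikube-linux-amd64 stop -p scheduled-stop-979258 --cancel-scheduled   # disarm before it fires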

                                                
                                    
x
+
TestRunningBinaryUpgrade (221.09s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.240653462 start -p running-upgrade-546761 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.240653462 start -p running-upgrade-546761 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m9.977827063s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-546761 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-546761 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m29.354684937s)
helpers_test.go:175: Cleaning up "running-upgrade-546761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-546761
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-546761: (1.204102582s)
--- PASS: TestRunningBinaryUpgrade (221.09s)
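The upgrade path here is simply re-running `start` on the same profile with the newer binary while the old cluster is still running. A sketch of the sequence (the versioned binary path under /tmp is created by the test itself and is not a stable artifact):

	/tmp/minikube-v1.26.0.240653462 start -p running-upgrade-546761 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p running-upgrade-546761 --memory=2200 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 delete -p running-upgrade-546761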

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-545724 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-545724 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (85.270225ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-545724] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20091
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
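As the MK_USAGE message says, `--no-kubernetes` and `--kubernetes-version` are mutually exclusive, and a globally configured version triggers the same error. A sketch of the remedy the error itself suggests:

	out/minikube-linux-amd64 config unset kubernetes-version
	out/minikube-linux-amd64 start -p NoKubernetes-545724 --no-kubernetes --driver=kvm2 --container-runtime=crio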

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (94.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-545724 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-545724 --driver=kvm2  --container-runtime=crio: (1m34.626560614s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-545724 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.88s)

                                                
                                    
x
+
TestPause/serial/Start (138.89s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-022944 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-022944 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m18.890834016s)
--- PASS: TestPause/serial/Start (138.89s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (47.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-545724 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-545724 --no-kubernetes --driver=kvm2  --container-runtime=crio: (45.783371963s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-545724 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-545724 status -o json: exit status 2 (248.0399ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-545724","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-545724
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-545724: (1.16605621s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (47.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (31.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-545724 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-545724 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.571374511s)
--- PASS: TestNoKubernetes/serial/Start (31.57s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-545724 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-545724 "sudo systemctl is-active --quiet service kubelet": exit status 1 (213.252041ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
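The exit status 1 here is the expected result: inside the guest, `systemctl is-active` exits non-zero (status 3 in this run) because the kubelet unit is not running, and minikube ssh propagates that failure. Dropping `--quiet` shows the state directly; a sketch, not part of the test:

	out/minikube-linux-amd64 ssh -p NoKubernetes-545724 "sudo systemctl is-active kubelet"
	# prints the unit state (e.g. "inactive") and exits non-zero when Kubernetes is not running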

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (30.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
E1216 20:45:33.552279   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (17.791481946s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E1216 20:45:50.481794   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (12.656629401s)
--- PASS: TestNoKubernetes/serial/ProfileList (30.45s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-545724
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-545724: (1.316040914s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (25.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-545724 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-545724 --driver=kvm2  --container-runtime=crio: (25.818143813s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (25.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-647112 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-647112 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (106.100924ms)

                                                
                                                
-- stdout --
	* [false-647112] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20091
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 20:46:22.705414   52675 out.go:345] Setting OutFile to fd 1 ...
	I1216 20:46:22.705526   52675 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:46:22.705535   52675 out.go:358] Setting ErrFile to fd 2...
	I1216 20:46:22.705540   52675 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 20:46:22.705699   52675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
	I1216 20:46:22.706250   52675 out.go:352] Setting JSON to false
	I1216 20:46:22.707178   52675 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5328,"bootTime":1734376655,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 20:46:22.707316   52675 start.go:139] virtualization: kvm guest
	I1216 20:46:22.709668   52675 out.go:177] * [false-647112] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 20:46:22.710960   52675 out.go:177]   - MINIKUBE_LOCATION=20091
	I1216 20:46:22.710966   52675 notify.go:220] Checking for updates...
	I1216 20:46:22.713143   52675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 20:46:22.714335   52675 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
	I1216 20:46:22.715466   52675 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
	I1216 20:46:22.716653   52675 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 20:46:22.717824   52675 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 20:46:22.719447   52675 config.go:182] Loaded profile config "NoKubernetes-545724": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1216 20:46:22.719573   52675 config.go:182] Loaded profile config "kubernetes-upgrade-560677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 20:46:22.719706   52675 config.go:182] Loaded profile config "pause-022944": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 20:46:22.719817   52675 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 20:46:22.757698   52675 out.go:177] * Using the kvm2 driver based on user configuration
	I1216 20:46:22.758942   52675 start.go:297] selected driver: kvm2
	I1216 20:46:22.758955   52675 start.go:901] validating driver "kvm2" against <nil>
	I1216 20:46:22.758967   52675 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 20:46:22.761135   52675 out.go:201] 
	W1216 20:46:22.762446   52675 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1216 20:46:22.763848   52675 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-647112 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-647112

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-647112

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-647112

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-647112

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-647112

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-647112

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-647112

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-647112

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-647112

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-647112

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-647112

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-647112" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-647112" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 16 Dec 2024 20:45:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.72.189:8443
  name: pause-022944
contexts:
- context:
    cluster: pause-022944
    extensions:
    - extension:
        last-update: Mon, 16 Dec 2024 20:45:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-022944
  name: pause-022944
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-022944
  user:
    client-certificate: /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/pause-022944/client.crt
    client-key: /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/pause-022944/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-647112

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647112"

                                                
                                                
----------------------- debugLogs end: false-647112 [took: 2.772739663s] --------------------------------
helpers_test.go:175: Cleaning up "false-647112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-647112
--- PASS: TestNetworkPlugins/group/false (3.02s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-545724 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-545724 "sudo systemctl is-active --quiet service kubelet": exit status 1 (238.546464ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)
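
Editor's note, not part of the recorded output: the check above passes precisely because the ssh'd command fails. A minimal Go sketch of the same probe, assuming a minikube binary on PATH and the profile name from the log (this is not the framework's no_kubernetes_test.go code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet` exits non-zero when kubelet is not running,
	// so a failing command here is the desired result.
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-545724",
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	if err == nil {
		fmt.Println("FAIL: kubelet is active, but Kubernetes should be disabled")
		return
	}
	if _, ok := err.(*exec.ExitError); ok {
		fmt.Println("PASS: kubelet is not running")
	} else {
		fmt.Printf("could not run the check: %v\n", err)
	}
}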

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.53s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (161.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3845274288 start -p stopped-upgrade-976873 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3845274288 start -p stopped-upgrade-976873 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m49.741606939s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3845274288 -p stopped-upgrade-976873 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3845274288 -p stopped-upgrade-976873 stop: (2.366147899s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-976873 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-976873 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.771398144s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (161.88s)
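
Editor's note: the upgrade test above is a three-step sequence (start with the old release, stop it, restart with the new build against the leftover state). A hedged Go sketch of that sequence, assuming the old binary path and the new binary path shown in the log (the real logic lives in version_upgrade_test.go and uses its own helpers):

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes one command and aborts the sketch on any failure.
func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v failed: %v", bin, args, err)
	}
}

func main() {
	profile := "stopped-upgrade-976873"
	// 1. Start a cluster with the older released binary.
	run("/tmp/minikube-v1.26.0.3845274288", "start", "-p", profile, "--memory=2200",
		"--vm-driver=kvm2", "--container-runtime=crio")
	// 2. Stop it with the same old binary.
	run("/tmp/minikube-v1.26.0.3845274288", "-p", profile, "stop")
	// 3. Start again with the freshly built binary; it must come up against
	//    the state written by the older release.
	run("out/minikube-linux-amd64", "start", "-p", profile, "--memory=2200",
		"--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
}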

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (107.81s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-232338 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-232338 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m47.805565163s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (107.81s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-976873
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (67.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-606219 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E1216 20:50:50.481173   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-606219 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m7.743973994s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (67.74s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-606219 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fa3f7cd2-cd2d-473e-8dfb-10bd667497c4] Pending
helpers_test.go:344: "busybox" [fa3f7cd2-cd2d-473e-8dfb-10bd667497c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fa3f7cd2-cd2d-473e-8dfb-10bd667497c4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004655917s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-606219 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.28s)
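
Editor's note: the DeployApp steps above follow a deploy-then-probe pattern: create the busybox pod, wait for it to reach Running, then read the file-descriptor limit inside it. A self-contained Go sketch under those assumptions (kubectl on PATH, the context from the log; the test framework's own polling helpers are not reproduced here):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx := "embed-certs-606219" // kubectl context, copied from the log above

	if out, err := exec.Command("kubectl", "--context", ctx,
		"create", "-f", "testdata/busybox.yaml").CombinedOutput(); err != nil {
		log.Fatalf("create failed: %v\n%s", err, out)
	}

	// Poll the pod phase until it reports Running or the 8m budget expires.
	deadline := time.Now().Add(8 * time.Minute)
	for {
		out, _ := exec.Command("kubectl", "--context", ctx, "get", "pod", "busybox",
			"-o", "jsonpath={.status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			break
		}
		if time.Now().After(deadline) {
			log.Fatal("busybox never reached Running")
		}
		time.Sleep(2 * time.Second)
	}

	// Same probe as the log: read the open-file limit inside the container.
	out, err := exec.Command("kubectl", "--context", ctx,
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		log.Fatalf("exec failed: %v", err)
	}
	fmt.Printf("open-file limit inside the pod: %s", out)
}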

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-606219 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-606219 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.011094919s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-606219 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-327790 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-327790 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (56.880343335s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.88s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-232338 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9f20cf00-154e-4dee-9889-d6dbcc1e38a7] Pending
helpers_test.go:344: "busybox" [9f20cf00-154e-4dee-9889-d6dbcc1e38a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9f20cf00-154e-4dee-9889-d6dbcc1e38a7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003932849s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-232338 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-232338 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-232338 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-327790 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [53dd437c-4baa-448d-bc55-5f30ec013bd7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [53dd437c-4baa-448d-bc55-5f30ec013bd7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003985979s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-327790 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-327790 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-327790 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (671.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-606219 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-606219 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (11m11.594373896s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-606219 -n embed-certs-606219
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (671.85s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (624.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-232338 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-232338 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (10m24.566665671s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-232338 -n no-preload-232338
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (624.83s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (6.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-847766 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-847766 --alsologtostderr -v=3: (6.31947665s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.32s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (566.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-327790 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-327790 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (9m26.194850373s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-327790 -n default-k8s-diff-port-327790
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (566.48s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-847766 -n old-k8s-version-847766
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-847766 -n old-k8s-version-847766: exit status 7 (63.966797ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-847766 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
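
Editor's note: the "(may be ok)" line above reflects that `minikube status` exits non-zero (7 here) when the host is stopped, and the test tolerates that before enabling the addon. A Go sketch of that tolerance, assuming minikube on PATH and the profile and image flags copied from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "old-k8s-version-847766"

	out, err := exec.Command("minikube", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	if err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			// Exit status 7 simply means the host is stopped; that is acceptable here.
			fmt.Printf("status exited %d (may be ok), host: %q\n",
				ee.ExitCode(), strings.TrimSpace(string(out)))
		} else {
			log.Fatalf("could not run minikube status: %v", err)
		}
	}

	// Enabling an addon is still expected to succeed on a stopped cluster.
	enable := exec.Command("minikube", "addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if msg, err := enable.CombinedOutput(); err != nil {
		log.Fatalf("enable dashboard failed: %v\n%s", err, msg)
	}
	fmt.Println("dashboard addon enabled while the cluster is stopped")
}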

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (47.75s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-194530 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E1216 21:18:53.556868   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-194530 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (47.747879013s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.75s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-194530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-194530 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.16599936s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-194530 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-194530 --alsologtostderr -v=3: (7.355256296s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-194530 -n newest-cni-194530
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-194530 -n newest-cni-194530: exit status 7 (75.887274ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-194530 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (37.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-194530 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-194530 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (37.54008372s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-194530 -n newest-cni-194530
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (55.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-647112 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-647112 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (55.182375688s)
--- PASS: TestNetworkPlugins/group/auto/Start (55.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-194530 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-194530 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-194530 -n newest-cni-194530
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-194530 -n newest-cni-194530: exit status 2 (277.532689ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-194530 -n newest-cni-194530
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-194530 -n newest-cni-194530: exit status 2 (270.438688ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-194530 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-194530 -n newest-cni-194530
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-194530 -n newest-cni-194530
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.68s)
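
Editor's note: the Pause test above checks the status fields around a pause/unpause round trip; while paused, `{{.APIServer}}` reads Paused and `{{.Kubelet}}` reads Stopped, and both status calls exit 2, which the test treats as acceptable. A Go sketch of that round trip, assuming minikube on PATH and the profile from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// status reads one status field, ignoring the non-zero exit codes that a paused
// or stopped component produces.
func status(profile, field string) string {
	out, err := exec.Command("minikube", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
	if err != nil {
		if _, ok := err.(*exec.ExitError); !ok {
			log.Fatalf("status %s: %v", field, err)
		}
	}
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "newest-cni-194530"

	if err := exec.Command("minikube", "pause", "-p", profile).Run(); err != nil {
		log.Fatalf("pause failed: %v", err)
	}
	fmt.Println("after pause:   APIServer =", status(profile, "APIServer"),
		" Kubelet =", status(profile, "Kubelet"))

	if err := exec.Command("minikube", "unpause", "-p", profile).Run(); err != nil {
		log.Fatalf("unpause failed: %v", err)
	}
	fmt.Println("after unpause: APIServer =", status(profile, "APIServer"),
		" Kubelet =", status(profile, "Kubelet"))
}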

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (84.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-647112 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-647112 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m24.593539315s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (121.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-647112 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E1216 21:20:50.481337   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/functional-782219/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-647112 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m1.8495223s)
--- PASS: TestNetworkPlugins/group/calico/Start (121.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-647112 "pgrep -a kubelet"
I1216 21:21:13.264973   14254 config.go:182] Loaded profile config "auto-647112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-647112 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context auto-647112 replace --force -f testdata/netcat-deployment.yaml: (1.290995981s)
I1216 21:21:14.871873   14254 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-9s966" [dcb2bfb2-845c-456f-a836-67c29fa6afbc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-9s966" [dcb2bfb2-845c-456f-a836-67c29fa6afbc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005402174s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-647112 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-647112 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-647112 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
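
Editor's note: the Localhost and HairPin checks above both run `nc -z` from inside the netcat deployment; "localhost" probes the pod itself, while "netcat" (the service name) probes hairpin traffic that leaves the pod and comes back through its own service. A short Go sketch of both probes, assuming kubectl on PATH and the auto-647112 context:

package main

import (
	"fmt"
	"os/exec"
)

// probe returns true when the TCP connect from inside the pod succeeds;
// `nc -z` only attempts the connection and exits zero on success.
func probe(ctx, target string) bool {
	cmd := exec.Command("kubectl", "--context", ctx,
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target))
	return cmd.Run() == nil
}

func main() {
	ctx := "auto-647112"
	fmt.Println("localhost reachable:         ", probe(ctx, "localhost"))
	fmt.Println("hairpin via service reachable:", probe(ctx, "netcat"))
}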

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (74.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-647112 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-647112 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m14.923602073s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-sq6fd" [5e9c8805-ef32-4a3c-b8bd-cb208ebf3f92] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006242785s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
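
Editor's note: the ControllerPod check above waits up to 10m for pods matching the CNI's label selector to become healthy. The test framework uses its own polling helper; an equivalent effect can be sketched with `kubectl wait`, assuming kubectl on PATH and the kindnet-647112 context:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Block until every pod labelled app=kindnet in kube-system reports Ready,
	// or fail after the same 10-minute budget used by the test.
	cmd := exec.Command("kubectl", "--context", "kindnet-647112",
		"wait", "--for=condition=ready", "pod",
		"-l", "app=kindnet", "-n", "kube-system", "--timeout=10m")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal("kindnet pods never became ready: ", err)
	}
}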

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-647112 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (102.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-647112 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
I1216 21:21:55.753981   14254 config.go:182] Loaded profile config "kindnet-647112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-647112 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m42.078796096s)
--- PASS: TestNetworkPlugins/group/flannel/Start (102.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-647112 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-7wvns" [9fd7ee83-6097-4995-ab75-30ee02b8a3f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 21:21:56.963300   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-7wvns" [9fd7ee83-6097-4995-ab75-30ee02b8a3f3] Running
E1216 21:22:02.136048   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:02.142461   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:02.153982   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:02.175553   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:02.217343   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:02.298822   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:02.460735   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:02.782317   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:03.424608   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:04.706375   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.133316096s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-647112 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-647112 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-647112 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (107.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-647112 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-647112 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m47.797184456s)
--- PASS: TestNetworkPlugins/group/bridge/Start (107.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jzl66" [a447ba1a-bfbb-4f7d-a985-726ce09be792] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005266894s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-647112 "pgrep -a kubelet"
I1216 21:22:41.412573   14254 config.go:182] Loaded profile config "calico-647112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-647112 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-mw86z" [98694a15-2b58-41ed-8abf-ac434d94ed5f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 21:22:43.115553   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-mw86z" [98694a15-2b58-41ed-8abf-ac434d94ed5f] Running
E1216 21:22:48.784787   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:48.791293   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:48.802734   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:48.824206   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:48.865714   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:48.947260   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:49.109466   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:49.431311   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:50.072801   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:22:51.354615   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.181262723s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-647112 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-647112 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-647112 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-647112 "pgrep -a kubelet"
I1216 21:22:56.227752   14254 config.go:182] Loaded profile config "custom-flannel-647112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-647112 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-kx6bz" [99288925-5664-48e5-b86b-72bf658edb13] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 21:22:59.038218   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/default-k8s-diff-port-327790/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-kx6bz" [99288925-5664-48e5-b86b-72bf658edb13] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005090237s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-647112 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-647112 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-647112 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1216 21:23:08.052303   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:23:08.060224   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:23:08.071815   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:23:08.093262   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:23:08.135053   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:23:08.216928   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (83.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-647112 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1216 21:23:13.186933   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:23:18.308412   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.crt: no such file or directory" logger="UnhandledError"
E1216 21:23:24.082030   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/no-preload-232338/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-647112 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m23.908521164s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ctkrv" [1c29d8d7-1c27-42da-ab8b-531677221da7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.085641164s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-647112 "pgrep -a kubelet"
I1216 21:23:44.179009   14254 config.go:182] Loaded profile config "flannel-647112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-647112 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-ndjzx" [5cc2cf1d-c585-425e-887e-7be80ef06be4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 21:23:49.031290   14254 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/old-k8s-version-847766/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-ndjzx" [5cc2cf1d-c585-425e-887e-7be80ef06be4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004395119s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-647112 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-647112 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-647112 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-647112 "pgrep -a kubelet"
I1216 21:24:13.520658   14254 config.go:182] Loaded profile config "bridge-647112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-647112 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-xv9z7" [75c85bc4-8350-44b7-a07c-a37d67dca973] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-xv9z7" [75c85bc4-8350-44b7-a07c-a37d67dca973] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004405638s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-647112 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-647112 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-647112 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-647112 "pgrep -a kubelet"
I1216 21:24:34.915342   14254 config.go:182] Loaded profile config "enable-default-cni-647112": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-647112 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-skfcd" [956b9b24-bf7b-4464-898a-7152afe6a942] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-skfcd" [956b9b24-bf7b-4464-898a-7152afe6a942] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004173591s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-647112 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-647112 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-647112 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    

Test skip (39/314)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.0/cached-images 0
15 TestDownloadOnly/v1.32.0/binaries 0
16 TestDownloadOnly/v1.32.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
260 TestStartStop/group/disable-driver-mounts 0.15
273 TestNetworkPlugins/group/kubenet 3.04
281 TestNetworkPlugins/group/cilium 3.41
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-618388 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-384008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-384008
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-647112 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-647112

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-647112

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-647112

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-647112

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-647112

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-647112

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-647112

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-647112

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-647112

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-647112

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-647112

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-647112" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-647112" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 16 Dec 2024 20:45:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.72.189:8443
  name: pause-022944
contexts:
- context:
    cluster: pause-022944
    extensions:
    - extension:
        last-update: Mon, 16 Dec 2024 20:45:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-022944
  name: pause-022944
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-022944
  user:
    client-certificate: /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/pause-022944/client.crt
    client-key: /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/pause-022944/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-647112

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647112"

                                                
                                                
----------------------- debugLogs end: kubenet-647112 [took: 2.889858549s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-647112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-647112
--- SKIP: TestNetworkPlugins/group/kubenet (3.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-647112 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-647112

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-647112

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-647112

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-647112

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-647112

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-647112

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-647112

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-647112

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-647112

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-647112

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-647112

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-647112" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-647112

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-647112

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-647112

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-647112

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-647112" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-647112" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 16 Dec 2024 20:45:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.72.189:8443
  name: pause-022944
contexts:
- context:
    cluster: pause-022944
    extensions:
    - extension:
        last-update: Mon, 16 Dec 2024 20:45:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-022944
  name: pause-022944
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-022944
  user:
    client-certificate: /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/pause-022944/client.crt
    client-key: /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/pause-022944/client.key


                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-647112

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-647112" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647112"

                                                
                                                
----------------------- debugLogs end: cilium-647112 [took: 3.247544592s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-647112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-647112
--- SKIP: TestNetworkPlugins/group/cilium (3.41s)